Oct 30 00:05:06.311687 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:08:54 -00 2025
Oct 30 00:05:06.311714 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796
Oct 30 00:05:06.311727 kernel: BIOS-provided physical RAM map:
Oct 30 00:05:06.311734 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 30 00:05:06.311741 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 30 00:05:06.311748 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 30 00:05:06.311756 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 30 00:05:06.311763 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 30 00:05:06.311775 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 30 00:05:06.311782 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 30 00:05:06.311792 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Oct 30 00:05:06.311799 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 30 00:05:06.311806 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 30 00:05:06.311813 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 30 00:05:06.311822 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 30 00:05:06.311831 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 30 00:05:06.311842 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 30 00:05:06.311850 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 30 00:05:06.311857 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 30 00:05:06.311865 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 30 00:05:06.311875 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 30 00:05:06.311885 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 30 00:05:06.311895 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 30 00:05:06.311905 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 00:05:06.311915 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 30 00:05:06.311929 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 30 00:05:06.311953 kernel: NX (Execute Disable) protection: active
Oct 30 00:05:06.311964 kernel: APIC: Static calls initialized
Oct 30 00:05:06.311975 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Oct 30 00:05:06.311988 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Oct 30 00:05:06.312000 kernel: extended physical RAM map:
Oct 30 00:05:06.312011 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 30 00:05:06.312022 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 30 00:05:06.312033 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 30 00:05:06.312043 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 30 00:05:06.312054 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 30 00:05:06.312069 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 30 00:05:06.312079 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 30 00:05:06.312090 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Oct 30 00:05:06.312101 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Oct 30 00:05:06.312117 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Oct 30 00:05:06.312131 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Oct 30 00:05:06.312142 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Oct 30 00:05:06.312154 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 30 00:05:06.312165 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 30 00:05:06.312176 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 30 00:05:06.312187 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 30 00:05:06.312198 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 30 00:05:06.312209 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 30 00:05:06.312222 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 30 00:05:06.312233 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 30 00:05:06.312244 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 30 00:05:06.312254 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 30 00:05:06.312265 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 30 00:05:06.312276 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 30 00:05:06.312287 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 00:05:06.312297 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 30 00:05:06.312308 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 30 00:05:06.312323 kernel: efi: EFI v2.7 by EDK II
Oct 30 00:05:06.312334 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Oct 30 00:05:06.312349 kernel: random: crng init done
Oct 30 00:05:06.312362 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Oct 30 00:05:06.312373 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Oct 30 00:05:06.312387 kernel: secureboot: Secure boot disabled
Oct 30 00:05:06.312397 kernel: SMBIOS 2.8 present.
Oct 30 00:05:06.312408 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 30 00:05:06.312419 kernel: DMI: Memory slots populated: 1/1
Oct 30 00:05:06.312429 kernel: Hypervisor detected: KVM
Oct 30 00:05:06.312440 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 30 00:05:06.312451 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 30 00:05:06.312462 kernel: kvm-clock: using sched offset of 4923724517 cycles
Oct 30 00:05:06.312476 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 30 00:05:06.312488 kernel: tsc: Detected 2794.748 MHz processor
Oct 30 00:05:06.312500 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 00:05:06.312511 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 00:05:06.312523 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 30 00:05:06.312534 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 30 00:05:06.312545 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 00:05:06.312556 kernel: Using GB pages for direct mapping
Oct 30 00:05:06.312587 kernel: ACPI: Early table checksum verification disabled
Oct 30 00:05:06.312600 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 30 00:05:06.312611 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 30 00:05:06.312623 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:05:06.312634 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:05:06.312645 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 30 00:05:06.312657 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:05:06.312672 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:05:06.312684 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:05:06.312695 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:05:06.312707 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 30 00:05:06.312718 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 30 00:05:06.312729 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 30 00:05:06.312741 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 30 00:05:06.312755 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 30 00:05:06.312766 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 30 00:05:06.312777 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 30 00:05:06.312788 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 30 00:05:06.312800 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 30 00:05:06.312810 kernel: No NUMA configuration found
Oct 30 00:05:06.312822 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Oct 30 00:05:06.312836 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Oct 30 00:05:06.312847 kernel: Zone ranges:
Oct 30 00:05:06.312859 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 00:05:06.312870 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Oct 30 00:05:06.312881 kernel: Normal empty
Oct 30 00:05:06.312893 kernel: Device empty
Oct 30 00:05:06.312906 kernel: Movable zone start for each node
Oct 30 00:05:06.312918 kernel: Early memory node ranges
Oct 30 00:05:06.312932 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 30 00:05:06.312957 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 30 00:05:06.312968 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 30 00:05:06.312979 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Oct 30 00:05:06.312990 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Oct 30 00:05:06.313001 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Oct 30 00:05:06.313013 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Oct 30 00:05:06.313028 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Oct 30 00:05:06.313042 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Oct 30 00:05:06.313054 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:05:06.313074 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 30 00:05:06.313089 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 30 00:05:06.313100 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:05:06.313112 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Oct 30 00:05:06.313124 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Oct 30 00:05:06.313135 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 30 00:05:06.313147 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 30 00:05:06.313163 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Oct 30 00:05:06.313175 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 30 00:05:06.313186 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 30 00:05:06.313198 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 00:05:06.313213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 30 00:05:06.313225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 30 00:05:06.313237 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 00:05:06.313249 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 30 00:05:06.313260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 30 00:05:06.313272 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 00:05:06.313284 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 30 00:05:06.313299 kernel: TSC deadline timer available
Oct 30 00:05:06.313311 kernel: CPU topo: Max. logical packages: 1
Oct 30 00:05:06.313323 kernel: CPU topo: Max. logical dies: 1
Oct 30 00:05:06.313335 kernel: CPU topo: Max. dies per package: 1
Oct 30 00:05:06.313347 kernel: CPU topo: Max. threads per core: 1
Oct 30 00:05:06.313358 kernel: CPU topo: Num. cores per package: 4
Oct 30 00:05:06.313370 kernel: CPU topo: Num. threads per package: 4
Oct 30 00:05:06.313385 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 30 00:05:06.313397 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 30 00:05:06.313409 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 30 00:05:06.313421 kernel: kvm-guest: setup PV sched yield
Oct 30 00:05:06.313433 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 30 00:05:06.313445 kernel: Booting paravirtualized kernel on KVM
Oct 30 00:05:06.313458 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 00:05:06.313470 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 30 00:05:06.313486 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 30 00:05:06.313498 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 30 00:05:06.313510 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 30 00:05:06.313522 kernel: kvm-guest: PV spinlocks enabled
Oct 30 00:05:06.313534 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 30 00:05:06.313553 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796
Oct 30 00:05:06.313568 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 00:05:06.313596 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 00:05:06.313608 kernel: Fallback order for Node 0: 0
Oct 30 00:05:06.313621 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Oct 30 00:05:06.313633 kernel: Policy zone: DMA32
Oct 30 00:05:06.313645 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 00:05:06.313657 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 30 00:05:06.313673 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 30 00:05:06.313685 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 00:05:06.313697 kernel: Dynamic Preempt: voluntary
Oct 30 00:05:06.313709 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 00:05:06.313723 kernel: rcu: RCU event tracing is enabled.
Oct 30 00:05:06.313736 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 30 00:05:06.313748 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 00:05:06.313760 kernel: Rude variant of Tasks RCU enabled.
Oct 30 00:05:06.313775 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 00:05:06.313787 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 00:05:06.313800 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 30 00:05:06.313815 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 00:05:06.313828 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 00:05:06.313840 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 00:05:06.313853 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 30 00:05:06.313868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 00:05:06.313880 kernel: Console: colour dummy device 80x25
Oct 30 00:05:06.313892 kernel: printk: legacy console [ttyS0] enabled
Oct 30 00:05:06.313904 kernel: ACPI: Core revision 20240827
Oct 30 00:05:06.313917 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 30 00:05:06.313929 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 00:05:06.313954 kernel: x2apic enabled
Oct 30 00:05:06.313970 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 00:05:06.313984 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 30 00:05:06.313998 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 30 00:05:06.314012 kernel: kvm-guest: setup PV IPIs
Oct 30 00:05:06.314024 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 30 00:05:06.314036 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 00:05:06.314049 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 30 00:05:06.314064 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 30 00:05:06.314076 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 30 00:05:06.314088 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 30 00:05:06.314101 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 00:05:06.314113 kernel: Spectre V2 : Mitigation: Retpolines
Oct 30 00:05:06.314126 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 00:05:06.314138 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 30 00:05:06.314152 kernel: active return thunk: retbleed_return_thunk
Oct 30 00:05:06.314165 kernel: RETBleed: Mitigation: untrained return thunk
Oct 30 00:05:06.314180 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 00:05:06.314193 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 00:05:06.314205 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 30 00:05:06.314219 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 30 00:05:06.314231 kernel: active return thunk: srso_return_thunk
Oct 30 00:05:06.314246 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 30 00:05:06.314259 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 00:05:06.314271 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 00:05:06.314283 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 00:05:06.314296 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 00:05:06.314308 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 30 00:05:06.314320 kernel: Freeing SMP alternatives memory: 32K
Oct 30 00:05:06.314335 kernel: pid_max: default: 32768 minimum: 301
Oct 30 00:05:06.314345 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 00:05:06.314356 kernel: landlock: Up and running.
Oct 30 00:05:06.314368 kernel: SELinux: Initializing.
Oct 30 00:05:06.314380 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 00:05:06.314391 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 00:05:06.314402 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 30 00:05:06.314416 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 30 00:05:06.314427 kernel: ... version: 0
Oct 30 00:05:06.314437 kernel: ... bit width: 48
Oct 30 00:05:06.314448 kernel: ... generic registers: 6
Oct 30 00:05:06.314458 kernel: ... value mask: 0000ffffffffffff
Oct 30 00:05:06.314469 kernel: ... max period: 00007fffffffffff
Oct 30 00:05:06.314479 kernel: ... fixed-purpose events: 0
Oct 30 00:05:06.314492 kernel: ... event mask: 000000000000003f
Oct 30 00:05:06.314503 kernel: signal: max sigframe size: 1776
Oct 30 00:05:06.314514 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 00:05:06.314524 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 00:05:06.314539 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 00:05:06.314550 kernel: smp: Bringing up secondary CPUs ...
Oct 30 00:05:06.314561 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 00:05:06.314589 kernel: .... node #0, CPUs: #1 #2 #3
Oct 30 00:05:06.314600 kernel: smp: Brought up 1 node, 4 CPUs
Oct 30 00:05:06.314611 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 30 00:05:06.314623 kernel: Memory: 2445192K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15956K init, 2088K bss, 114668K reserved, 0K cma-reserved)
Oct 30 00:05:06.314634 kernel: devtmpfs: initialized
Oct 30 00:05:06.314644 kernel: x86/mm: Memory block size: 128MB
Oct 30 00:05:06.314655 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 30 00:05:06.314670 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 30 00:05:06.314681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Oct 30 00:05:06.314691 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 30 00:05:06.314703 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Oct 30 00:05:06.314713 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 30 00:05:06.314724 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 00:05:06.314735 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 30 00:05:06.314748 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 00:05:06.314759 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 00:05:06.314770 kernel: audit: initializing netlink subsys (disabled)
Oct 30 00:05:06.314780 kernel: audit: type=2000 audit(1761782704.178:1): state=initialized audit_enabled=0 res=1
Oct 30 00:05:06.314791 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 00:05:06.314802 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 00:05:06.314812 kernel: cpuidle: using governor menu
Oct 30 00:05:06.314825 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 00:05:06.314836 kernel: dca service started, version 1.12.1
Oct 30 00:05:06.314847 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 30 00:05:06.314857 kernel: PCI: Using configuration type 1 for base access
Oct 30 00:05:06.314868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 00:05:06.314879 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 00:05:06.314890 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 00:05:06.314904 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 00:05:06.314916 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 00:05:06.314926 kernel: ACPI: Added _OSI(Module Device)
Oct 30 00:05:06.314946 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 00:05:06.314957 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 00:05:06.314967 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 00:05:06.314978 kernel: ACPI: Interpreter enabled
Oct 30 00:05:06.314989 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 30 00:05:06.315002 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 00:05:06.315013 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 00:05:06.315023 kernel: PCI: Using E820 reservations for host bridge windows
Oct 30 00:05:06.315034 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 30 00:05:06.315045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 00:05:06.315332 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 00:05:06.315589 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 30 00:05:06.315819 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 30 00:05:06.315835 kernel: PCI host bridge to bus 0000:00
Oct 30 00:05:06.316279 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 00:05:06.316750 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 30 00:05:06.316980 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 00:05:06.317235 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 30 00:05:06.317452 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 30 00:05:06.317693 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 30 00:05:06.317909 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 00:05:06.318175 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 30 00:05:06.318448 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 30 00:05:06.318708 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 30 00:05:06.318957 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 30 00:05:06.319187 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 30 00:05:06.319416 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 30 00:05:06.319687 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 00:05:06.319979 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 30 00:05:06.320234 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 30 00:05:06.320476 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 30 00:05:06.320752 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 30 00:05:06.321013 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 30 00:05:06.321257 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 30 00:05:06.321475 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 30 00:05:06.321754 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 00:05:06.322005 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 30 00:05:06.322252 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 30 00:05:06.322593 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 30 00:05:06.322951 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 30 00:05:06.323297 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 30 00:05:06.323687 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 30 00:05:06.324052 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 30 00:05:06.324385 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 30 00:05:06.324752 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 30 00:05:06.325111 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 30 00:05:06.325324 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 30 00:05:06.325339 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 30 00:05:06.325350 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 30 00:05:06.325361 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 30 00:05:06.325372 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 30 00:05:06.325388 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 30 00:05:06.325398 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 30 00:05:06.325409 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 30 00:05:06.325420 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 30 00:05:06.325431 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 30 00:05:06.325441 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 30 00:05:06.325452 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 30 00:05:06.325465 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 30 00:05:06.325476 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 30 00:05:06.325487 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 30 00:05:06.325497 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 30 00:05:06.325508 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 30 00:05:06.325518 kernel: iommu: Default domain type: Translated
Oct 30 00:05:06.325529 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 00:05:06.325542 kernel: efivars: Registered efivars operations
Oct 30 00:05:06.325553 kernel: PCI: Using ACPI for IRQ routing
Oct 30 00:05:06.325563 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 30 00:05:06.325590 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 30 00:05:06.325601 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Oct 30 00:05:06.325612 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Oct 30 00:05:06.325622 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Oct 30 00:05:06.325636 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Oct 30 00:05:06.325646 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Oct 30 00:05:06.325657 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Oct 30 00:05:06.325667 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Oct 30 00:05:06.325865 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 30 00:05:06.326072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 30 00:05:06.326274 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 30 00:05:06.326294 kernel: vgaarb: loaded
Oct 30 00:05:06.326306 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 30 00:05:06.326317 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 30 00:05:06.326328 kernel: clocksource: Switched to clocksource kvm-clock
Oct 30 00:05:06.326339 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 00:05:06.326350 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 00:05:06.326362 kernel: pnp: PnP ACPI init
Oct 30 00:05:06.326619 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 30 00:05:06.326643 kernel: pnp: PnP ACPI: found 6 devices
Oct 30 00:05:06.326656 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 00:05:06.326668 kernel: NET: Registered PF_INET protocol family
Oct 30 00:05:06.326680 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 00:05:06.326693 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 30 00:05:06.326709 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 00:05:06.326721 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 00:05:06.326736 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 30 00:05:06.326749 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 30 00:05:06.326762 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 00:05:06.326775 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 00:05:06.326787 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 00:05:06.326803 kernel: NET: Registered PF_XDP protocol family
Oct 30 00:05:06.327062 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 30 00:05:06.327295 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 30 00:05:06.327515 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 30 00:05:06.327770 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 30 00:05:06.328002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 30 00:05:06.328227 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 30 00:05:06.328447 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 30 00:05:06.328689 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 30 00:05:06.328710 kernel: PCI: CLS 0 bytes, default 64
Oct 30 00:05:06.328724 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 00:05:06.328746 kernel: Initialise system trusted keyrings
Oct 30 00:05:06.328759 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 30 00:05:06.328771 kernel: Key type asymmetric registered
Oct 30 00:05:06.328784 kernel: Asymmetric key parser 'x509' registered
Oct 30 00:05:06.328796 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 00:05:06.328809 kernel: io scheduler mq-deadline registered
Oct 30 00:05:06.328821 kernel: io scheduler kyber registered
Oct 30 00:05:06.328838 kernel: io scheduler bfq registered
Oct 30 00:05:06.328851 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 00:05:06.328864 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 30 00:05:06.328877 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 30 00:05:06.328889 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 30 00:05:06.328902 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 00:05:06.328915 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 00:05:06.328931 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 30 00:05:06.328956 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 30 00:05:06.328969 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 30 00:05:06.328981 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 30 00:05:06.329232 kernel: rtc_cmos 00:04: RTC can
wake from S4 Oct 30 00:05:06.329463 kernel: rtc_cmos 00:04: registered as rtc0 Oct 30 00:05:06.329734 kernel: rtc_cmos 00:04: setting system clock to 2025-10-30T00:05:04 UTC (1761782704) Oct 30 00:05:06.329970 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 30 00:05:06.329989 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 30 00:05:06.330002 kernel: efifb: probing for efifb Oct 30 00:05:06.330014 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Oct 30 00:05:06.330027 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Oct 30 00:05:06.330039 kernel: efifb: scrolling: redraw Oct 30 00:05:06.330056 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Oct 30 00:05:06.330069 kernel: Console: switching to colour frame buffer device 160x50 Oct 30 00:05:06.330082 kernel: fb0: EFI VGA frame buffer device Oct 30 00:05:06.330094 kernel: pstore: Using crash dump compression: deflate Oct 30 00:05:06.330106 kernel: pstore: Registered efi_pstore as persistent store backend Oct 30 00:05:06.330118 kernel: NET: Registered PF_INET6 protocol family Oct 30 00:05:06.330131 kernel: Segment Routing with IPv6 Oct 30 00:05:06.330146 kernel: In-situ OAM (IOAM) with IPv6 Oct 30 00:05:06.330158 kernel: NET: Registered PF_PACKET protocol family Oct 30 00:05:06.330170 kernel: Key type dns_resolver registered Oct 30 00:05:06.330182 kernel: IPI shorthand broadcast: enabled Oct 30 00:05:06.330195 kernel: sched_clock: Marking stable (1459003951, 371360937)->(1904577383, -74212495) Oct 30 00:05:06.330207 kernel: registered taskstats version 1 Oct 30 00:05:06.330219 kernel: Loading compiled-in X.509 certificates Oct 30 00:05:06.330232 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: b5a3367ee15a1313a0db8339b653e9e56c1bb8d0' Oct 30 00:05:06.330247 kernel: Demotion targets for Node 0: null Oct 30 00:05:06.330259 kernel: Key type .fscrypt registered Oct 30 00:05:06.330271 kernel: Key type 
fscrypt-provisioning registered Oct 30 00:05:06.330283 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 30 00:05:06.330295 kernel: ima: Allocated hash algorithm: sha1 Oct 30 00:05:06.330307 kernel: ima: No architecture policies found Oct 30 00:05:06.330318 kernel: clk: Disabling unused clocks Oct 30 00:05:06.330334 kernel: Freeing unused kernel image (initmem) memory: 15956K Oct 30 00:05:06.330346 kernel: Write protecting the kernel read-only data: 40960k Oct 30 00:05:06.330362 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 30 00:05:06.330373 kernel: Run /init as init process Oct 30 00:05:06.330386 kernel: with arguments: Oct 30 00:05:06.330397 kernel: /init Oct 30 00:05:06.330410 kernel: with environment: Oct 30 00:05:06.330425 kernel: HOME=/ Oct 30 00:05:06.330437 kernel: TERM=linux Oct 30 00:05:06.330449 kernel: SCSI subsystem initialized Oct 30 00:05:06.330462 kernel: libata version 3.00 loaded. Oct 30 00:05:06.330732 kernel: ahci 0000:00:1f.2: version 3.0 Oct 30 00:05:06.330753 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 30 00:05:06.330999 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 30 00:05:06.331235 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 30 00:05:06.331461 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 30 00:05:06.331747 kernel: scsi host0: ahci Oct 30 00:05:06.332011 kernel: scsi host1: ahci Oct 30 00:05:06.332255 kernel: scsi host2: ahci Oct 30 00:05:06.332505 kernel: scsi host3: ahci Oct 30 00:05:06.332771 kernel: scsi host4: ahci Oct 30 00:05:06.333027 kernel: scsi host5: ahci Oct 30 00:05:06.333047 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 30 00:05:06.333061 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 30 00:05:06.333074 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Oct 30 00:05:06.333092 kernel: 
ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 30 00:05:06.333105 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 30 00:05:06.333118 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 30 00:05:06.333131 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 30 00:05:06.333144 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 30 00:05:06.333157 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 30 00:05:06.333170 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 30 00:05:06.333186 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 30 00:05:06.333199 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 30 00:05:06.333212 kernel: ata3.00: LPM support broken, forcing max_power Oct 30 00:05:06.333225 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 30 00:05:06.333237 kernel: ata3.00: applying bridge limits Oct 30 00:05:06.333250 kernel: ata3.00: LPM support broken, forcing max_power Oct 30 00:05:06.333262 kernel: ata3.00: configured for UDMA/100 Oct 30 00:05:06.333539 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 30 00:05:06.333784 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 30 00:05:06.333982 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 30 00:05:06.333999 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 30 00:05:06.334011 kernel: GPT:16515071 != 27000831 Oct 30 00:05:06.334023 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 30 00:05:06.334039 kernel: GPT:16515071 != 27000831 Oct 30 00:05:06.334050 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 30 00:05:06.334059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 30 00:05:06.334259 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 30 00:05:06.334272 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 30 00:05:06.334461 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 30 00:05:06.334474 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 30 00:05:06.334487 kernel: device-mapper: uevent: version 1.0.3 Oct 30 00:05:06.334496 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 30 00:05:06.334505 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 30 00:05:06.334514 kernel: raid6: avx2x4 gen() 29781 MB/s Oct 30 00:05:06.334523 kernel: raid6: avx2x2 gen() 29789 MB/s Oct 30 00:05:06.334532 kernel: raid6: avx2x1 gen() 24795 MB/s Oct 30 00:05:06.334541 kernel: raid6: using algorithm avx2x2 gen() 29789 MB/s Oct 30 00:05:06.334553 kernel: raid6: .... 
xor() 18501 MB/s, rmw enabled Oct 30 00:05:06.334562 kernel: raid6: using avx2x2 recovery algorithm Oct 30 00:05:06.334589 kernel: xor: automatically using best checksumming function avx Oct 30 00:05:06.334603 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 30 00:05:06.334615 kernel: BTRFS: device fsid 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182) Oct 30 00:05:06.334625 kernel: BTRFS info (device dm-0): first mount of filesystem 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe Oct 30 00:05:06.334634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:05:06.334646 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 30 00:05:06.334655 kernel: BTRFS info (device dm-0): enabling free space tree Oct 30 00:05:06.334664 kernel: loop: module loaded Oct 30 00:05:06.334673 kernel: loop0: detected capacity change from 0 to 100120 Oct 30 00:05:06.334682 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 30 00:05:06.334692 systemd[1]: Successfully made /usr/ read-only. Oct 30 00:05:06.334705 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 30 00:05:06.334718 systemd[1]: Detected virtualization kvm. Oct 30 00:05:06.334727 systemd[1]: Detected architecture x86-64. Oct 30 00:05:06.334736 systemd[1]: Running in initrd. Oct 30 00:05:06.334745 systemd[1]: No hostname configured, using default hostname. Oct 30 00:05:06.334758 systemd[1]: Hostname set to . Oct 30 00:05:06.334772 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 30 00:05:06.334789 systemd[1]: Queued start job for default target initrd.target. 
Oct 30 00:05:06.334802 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 30 00:05:06.334814 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 30 00:05:06.334828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 30 00:05:06.334842 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 30 00:05:06.334855 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 30 00:05:06.334874 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 30 00:05:06.334888 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 30 00:05:06.334902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 30 00:05:06.334915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 30 00:05:06.334928 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 30 00:05:06.334953 systemd[1]: Reached target paths.target - Path Units. Oct 30 00:05:06.334973 systemd[1]: Reached target slices.target - Slice Units. Oct 30 00:05:06.334989 systemd[1]: Reached target swap.target - Swaps. Oct 30 00:05:06.335006 systemd[1]: Reached target timers.target - Timer Units. Oct 30 00:05:06.335022 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 30 00:05:06.335039 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 30 00:05:06.335056 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 30 00:05:06.335073 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 30 00:05:06.335094 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 30 00:05:06.335111 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 30 00:05:06.335127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 30 00:05:06.335144 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 00:05:06.335162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 30 00:05:06.335178 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 30 00:05:06.335191 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 30 00:05:06.335208 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 30 00:05:06.335222 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 30 00:05:06.335236 systemd[1]: Starting systemd-fsck-usr.service... Oct 30 00:05:06.335249 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 30 00:05:06.335262 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 30 00:05:06.335276 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:05:06.335292 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 30 00:05:06.335306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 30 00:05:06.335319 systemd[1]: Finished systemd-fsck-usr.service. Oct 30 00:05:06.335333 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 30 00:05:06.335380 systemd-journald[317]: Collecting audit messages is disabled. Oct 30 00:05:06.335409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 30 00:05:06.335423 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 30 00:05:06.335438 kernel: Bridge firewalling registered Oct 30 00:05:06.335452 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 00:05:06.335465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 30 00:05:06.335478 systemd-journald[317]: Journal started Oct 30 00:05:06.335504 systemd-journald[317]: Runtime Journal (/run/log/journal/fc998346d9f4437a8e59d2035e6d5bd9) is 6M, max 48.1M, 42.1M free. Oct 30 00:05:06.327713 systemd-modules-load[320]: Inserted module 'br_netfilter' Oct 30 00:05:06.343680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 00:05:06.348729 systemd[1]: Started systemd-journald.service - Journal Service. Oct 30 00:05:06.349661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:05:06.354396 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 30 00:05:06.361732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 00:05:06.366977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 00:05:06.371438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 30 00:05:06.376777 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 00:05:06.382379 systemd-tmpfiles[342]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 30 00:05:06.389782 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:05:06.395872 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 30 00:05:06.399645 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 30 00:05:06.432744 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796 Oct 30 00:05:06.455333 systemd-resolved[348]: Positive Trust Anchors: Oct 30 00:05:06.455350 systemd-resolved[348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:05:06.455355 systemd-resolved[348]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 00:05:06.455394 systemd-resolved[348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:05:06.491338 systemd-resolved[348]: Defaulting to hostname 'linux'. Oct 30 00:05:06.492986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:05:06.494021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:05:06.572620 kernel: Loading iSCSI transport class v2.0-870. 
Oct 30 00:05:06.587613 kernel: iscsi: registered transport (tcp) Oct 30 00:05:06.614785 kernel: iscsi: registered transport (qla4xxx) Oct 30 00:05:06.614875 kernel: QLogic iSCSI HBA Driver Oct 30 00:05:06.650041 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 30 00:05:06.674542 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 30 00:05:06.680944 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 30 00:05:06.761757 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 30 00:05:06.764422 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 30 00:05:06.766886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 30 00:05:06.815051 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 30 00:05:06.825880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:05:06.864206 systemd-udevd[608]: Using default interface naming scheme 'v257'. Oct 30 00:05:06.883204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:05:06.886027 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 30 00:05:06.923706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 30 00:05:06.927987 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:05:06.932021 dracut-pre-trigger[672]: rd.md=0: removing MD RAID activation Oct 30 00:05:06.972821 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 30 00:05:06.978767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 30 00:05:06.990413 systemd-networkd[712]: lo: Link UP Oct 30 00:05:06.990419 systemd-networkd[712]: lo: Gained carrier Oct 30 00:05:06.991416 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:05:06.993173 systemd[1]: Reached target network.target - Network. Oct 30 00:05:07.098286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 00:05:07.102179 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 30 00:05:07.164962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 30 00:05:07.186973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 00:05:07.223596 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 30 00:05:07.227600 kernel: cryptd: max_cpu_qlen set to 1000 Oct 30 00:05:07.236846 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 30 00:05:07.243613 kernel: AES CTR mode by8 optimization enabled Oct 30 00:05:07.243645 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 00:05:07.243652 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 00:05:07.244192 systemd-networkd[712]: eth0: Link UP Oct 30 00:05:07.253977 systemd-networkd[712]: eth0: Gained carrier Oct 30 00:05:07.253996 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 00:05:07.267637 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 30 00:05:07.273047 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Oct 30 00:05:07.286040 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 30 00:05:07.289139 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 00:05:07.289371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:05:07.291384 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:05:07.296486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:05:07.329389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:05:07.385005 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 30 00:05:07.387329 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 30 00:05:07.393673 disk-uuid[840]: Primary Header is updated. Oct 30 00:05:07.393673 disk-uuid[840]: Secondary Entries is updated. Oct 30 00:05:07.393673 disk-uuid[840]: Secondary Header is updated. Oct 30 00:05:07.391238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 30 00:05:07.396004 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 30 00:05:07.399846 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 30 00:05:07.457270 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 30 00:05:08.341941 systemd-networkd[712]: eth0: Gained IPv6LL Oct 30 00:05:08.478652 disk-uuid[851]: Warning: The kernel is still using the old partition table. Oct 30 00:05:08.478652 disk-uuid[851]: The new table will be used at the next reboot or after you Oct 30 00:05:08.478652 disk-uuid[851]: run partprobe(8) or kpartx(8) Oct 30 00:05:08.478652 disk-uuid[851]: The operation has completed successfully. Oct 30 00:05:08.499195 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 30 00:05:08.499349 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Oct 30 00:05:08.501362 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 30 00:05:08.635474 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Oct 30 00:05:08.635536 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 30 00:05:08.635549 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:05:08.641664 kernel: BTRFS info (device vda6): turning on async discard Oct 30 00:05:08.641758 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 00:05:08.651629 kernel: BTRFS info (device vda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 30 00:05:08.653307 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 30 00:05:08.657267 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 30 00:05:08.941496 ignition[888]: Ignition 2.22.0 Oct 30 00:05:08.941510 ignition[888]: Stage: fetch-offline Oct 30 00:05:08.941588 ignition[888]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:08.941604 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 00:05:08.941798 ignition[888]: parsed url from cmdline: "" Oct 30 00:05:08.941802 ignition[888]: no config URL provided Oct 30 00:05:08.941808 ignition[888]: reading system config file "/usr/lib/ignition/user.ign" Oct 30 00:05:08.941821 ignition[888]: no config at "/usr/lib/ignition/user.ign" Oct 30 00:05:08.941880 ignition[888]: op(1): [started] loading QEMU firmware config module Oct 30 00:05:08.941885 ignition[888]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 30 00:05:08.986420 ignition[888]: op(1): [finished] loading QEMU firmware config module Oct 30 00:05:09.069891 ignition[888]: parsing config with SHA512: 844778a2177accc7560129dfcfc960ae3a3bd5295248232b7894863461204f43139668eaee869f79640bc5b5d3daaf68e007889a74fd183b6ab6746697b810fd Oct 30 00:05:09.076040 unknown[888]: fetched 
base config from "system" Oct 30 00:05:09.076053 unknown[888]: fetched user config from "qemu" Oct 30 00:05:09.076621 ignition[888]: fetch-offline: fetch-offline passed Oct 30 00:05:09.076748 ignition[888]: Ignition finished successfully Oct 30 00:05:09.082957 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 30 00:05:09.083805 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 30 00:05:09.084995 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 30 00:05:09.135519 ignition[899]: Ignition 2.22.0 Oct 30 00:05:09.135533 ignition[899]: Stage: kargs Oct 30 00:05:09.135807 ignition[899]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:09.135821 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 00:05:09.136550 ignition[899]: kargs: kargs passed Oct 30 00:05:09.136620 ignition[899]: Ignition finished successfully Oct 30 00:05:09.144125 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 30 00:05:09.146880 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 30 00:05:09.210216 ignition[908]: Ignition 2.22.0 Oct 30 00:05:09.210244 ignition[908]: Stage: disks Oct 30 00:05:09.210391 ignition[908]: no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:09.210402 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 00:05:09.211266 ignition[908]: disks: disks passed Oct 30 00:05:09.211316 ignition[908]: Ignition finished successfully Oct 30 00:05:09.242095 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 30 00:05:09.246192 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 30 00:05:09.247328 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 30 00:05:09.280279 systemd[1]: Reached target local-fs.target - Local File Systems. 
Oct 30 00:05:09.281306 systemd[1]: Reached target sysinit.target - System Initialization. Oct 30 00:05:09.282344 systemd[1]: Reached target basic.target - Basic System. Oct 30 00:05:09.294431 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 30 00:05:09.362998 systemd-fsck[918]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 30 00:05:09.897099 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 30 00:05:09.902423 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 30 00:05:10.103612 kernel: EXT4-fs (vda9): mounted filesystem 357f8fb5-672c-465c-a10c-74ee57b7ef1c r/w with ordered data mode. Quota mode: none. Oct 30 00:05:10.104468 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 30 00:05:10.107703 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 30 00:05:10.112700 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:05:10.196648 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 30 00:05:10.227266 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 30 00:05:10.241454 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (926) Oct 30 00:05:10.241490 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 30 00:05:10.241518 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:05:10.227367 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Oct 30 00:05:10.249402 kernel: BTRFS info (device vda6): turning on async discard Oct 30 00:05:10.249429 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 00:05:10.241433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 30 00:05:10.252929 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 30 00:05:10.264523 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 30 00:05:10.269946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 30 00:05:10.348847 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Oct 30 00:05:10.355893 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Oct 30 00:05:10.361743 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Oct 30 00:05:10.367328 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Oct 30 00:05:10.476975 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 30 00:05:10.481764 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 30 00:05:10.486187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 30 00:05:10.515483 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 30 00:05:10.518257 kernel: BTRFS info (device vda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 30 00:05:10.538769 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 30 00:05:10.576648 ignition[1040]: INFO : Ignition 2.22.0 Oct 30 00:05:10.576648 ignition[1040]: INFO : Stage: mount Oct 30 00:05:10.579330 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 30 00:05:10.579330 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 30 00:05:10.583407 ignition[1040]: INFO : mount: mount passed Oct 30 00:05:10.584741 ignition[1040]: INFO : Ignition finished successfully Oct 30 00:05:10.589086 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 30 00:05:10.591545 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 30 00:05:10.612097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 30 00:05:10.638896 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1052) Oct 30 00:05:10.638941 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9 Oct 30 00:05:10.638955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 30 00:05:10.645170 kernel: BTRFS info (device vda6): turning on async discard Oct 30 00:05:10.645203 kernel: BTRFS info (device vda6): enabling free space tree Oct 30 00:05:10.648136 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 30 00:05:10.694441 ignition[1069]: INFO : Ignition 2.22.0
Oct 30 00:05:10.694441 ignition[1069]: INFO : Stage: files
Oct 30 00:05:10.697239 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:05:10.697239 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:05:10.701244 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 00:05:10.703246 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 00:05:10.703246 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 00:05:10.708203 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 00:05:10.710474 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 00:05:10.712882 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 00:05:10.711072 unknown[1069]: wrote ssh authorized keys file for user: core
Oct 30 00:05:10.717048 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 00:05:10.717048 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 30 00:05:10.802901 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 00:05:10.975699 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 00:05:10.979453 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 00:05:11.027347 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 00:05:11.040736 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 00:05:11.055635 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 00:05:11.076146 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 00:05:11.076146 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 00:05:11.084479 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Oct 30 00:05:11.433245 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 30 00:05:12.114317 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Oct 30 00:05:12.114317 ignition[1069]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 30 00:05:12.121143 ignition[1069]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 00:05:12.302570 ignition[1069]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 00:05:12.302570 ignition[1069]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 30 00:05:12.302570 ignition[1069]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 30 00:05:12.314198 ignition[1069]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 00:05:12.314198 ignition[1069]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 00:05:12.314198 ignition[1069]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 30 00:05:12.314198 ignition[1069]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 30 00:05:12.337366 ignition[1069]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 00:05:12.343875 ignition[1069]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 00:05:12.347235 ignition[1069]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 30 00:05:12.347235 ignition[1069]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 30 00:05:12.347235 ignition[1069]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 30 00:05:12.347235 ignition[1069]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 00:05:12.347235 ignition[1069]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 00:05:12.347235 ignition[1069]: INFO : files: files passed
Oct 30 00:05:12.347235 ignition[1069]: INFO : Ignition finished successfully
Oct 30 00:05:12.354162 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 00:05:12.361600 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 00:05:12.365213 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 00:05:12.391422 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 00:05:12.391649 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 00:05:12.400293 initrd-setup-root-after-ignition[1101]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 30 00:05:12.405608 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:05:12.405608 initrd-setup-root-after-ignition[1103]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:05:12.414701 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:05:12.418507 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:05:12.419387 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
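For reference, the Ignition "files" stage operations logged above are the kind produced by a Butane config along the following lines. This is a hypothetical sketch reconstructed from the log entries only: the paths, URLs, and unit names come from the log, while the ssh key, file contents, unit bodies, and spec version are placeholders.

```yaml
# Hypothetical Butane config (transpiled to Ignition JSON before boot).
# Only paths/URLs/unit names are taken from the log; everything else is assumed.
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... # placeholder key
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true          # matches op(11): preset enabled
      contents: |
        [Unit]
        Description=Unpack helm binary (placeholder body)
        [Install]
        WantedBy=multi-user.target
    - name: coreos-metadata.service
      enabled: false         # matches op(f): preset disabled
```

The small home-directory files (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml) and /etc/flatcar/update.conf would be additional `storage.files` entries with inline contents, omitted here since the log does not show what they contain.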
Oct 30 00:05:12.426633 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 00:05:12.497312 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 00:05:12.497460 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 00:05:12.498777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 00:05:12.503607 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 00:05:12.507353 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 00:05:12.511548 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 00:05:12.542912 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:05:12.545099 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 00:05:12.569879 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 00:05:12.570024 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:05:12.574609 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:05:12.580247 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 00:05:12.583706 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 00:05:12.583895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:05:12.589452 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 00:05:12.590416 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 00:05:12.595158 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 00:05:12.598039 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:05:12.601517 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 00:05:12.605305 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:05:12.609162 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 00:05:12.613308 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:05:12.616842 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 00:05:12.618078 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 00:05:12.632486 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 00:05:12.636003 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 00:05:12.636163 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:05:12.641214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:05:12.644955 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:05:12.646267 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 00:05:12.650286 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:05:12.653790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 00:05:12.653975 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:05:12.659277 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 00:05:12.659451 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:05:12.663231 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 00:05:12.666221 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 00:05:12.671718 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:05:12.672604 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 00:05:12.673480 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 00:05:12.680072 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 00:05:12.680200 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:05:12.682603 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 00:05:12.682693 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:05:12.705297 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 00:05:12.705432 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:05:12.708051 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 00:05:12.708162 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 00:05:12.715724 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 00:05:12.717358 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 00:05:12.721433 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 00:05:12.721622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:05:12.724722 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 00:05:12.724848 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:05:12.728244 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 00:05:12.728386 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:05:12.738906 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 00:05:12.739043 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 00:05:12.768677 ignition[1127]: INFO : Ignition 2.22.0
Oct 30 00:05:12.768677 ignition[1127]: INFO : Stage: umount
Oct 30 00:05:12.772012 ignition[1127]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:05:12.772012 ignition[1127]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:05:12.772012 ignition[1127]: INFO : umount: umount passed
Oct 30 00:05:12.772012 ignition[1127]: INFO : Ignition finished successfully
Oct 30 00:05:12.777086 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 00:05:12.777841 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 00:05:12.777975 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 00:05:12.783112 systemd[1]: Stopped target network.target - Network.
Oct 30 00:05:12.784099 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 00:05:12.784167 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 00:05:12.787147 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 00:05:12.787220 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 00:05:12.788072 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 00:05:12.788128 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 00:05:12.793098 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 00:05:12.793153 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 00:05:12.796183 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 00:05:12.838793 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 00:05:12.849821 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 00:05:12.850044 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 00:05:12.861311 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 00:05:12.861531 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 00:05:12.868109 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 00:05:12.912073 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 00:05:12.912156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:05:12.918200 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 00:05:12.918956 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 00:05:12.919059 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:05:12.919585 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 00:05:12.919636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:05:12.965453 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 00:05:12.965557 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:05:12.966414 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:05:12.977951 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 00:05:12.978079 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 00:05:13.013176 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 00:05:13.013324 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 00:05:13.033353 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 00:05:13.039885 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:05:13.044543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 00:05:13.044637 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:05:13.048171 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 00:05:13.048214 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:05:13.051472 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 00:05:13.051542 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:05:13.053189 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 00:05:13.053243 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:05:13.062045 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 00:05:13.062111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:05:13.069813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 00:05:13.070464 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 00:05:13.070526 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:05:13.071081 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 00:05:13.071133 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:05:13.071651 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 30 00:05:13.071715 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:05:13.088275 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 00:05:13.088364 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:05:13.118021 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:05:13.118152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:13.119829 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 00:05:13.129686 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 00:05:13.134413 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 00:05:13.134600 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 00:05:13.138535 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 00:05:13.155324 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 00:05:13.196198 systemd[1]: Switching root.
Oct 30 00:05:13.273501 systemd-journald[317]: Journal stopped
Oct 30 00:05:16.181436 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Oct 30 00:05:16.181547 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 00:05:16.181606 kernel: SELinux: policy capability open_perms=1
Oct 30 00:05:16.181621 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 00:05:16.181639 kernel: SELinux: policy capability always_check_network=0
Oct 30 00:05:16.181686 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 00:05:16.181710 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 00:05:16.181724 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 00:05:16.181742 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 00:05:16.181758 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 00:05:16.181779 kernel: audit: type=1403 audit(1761782715.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 00:05:16.181793 systemd[1]: Successfully loaded SELinux policy in 84.353ms.
Oct 30 00:05:16.181809 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.747ms.
Oct 30 00:05:16.181823 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:05:16.181837 systemd[1]: Detected virtualization kvm.
Oct 30 00:05:16.181852 systemd[1]: Detected architecture x86-64.
Oct 30 00:05:16.181865 systemd[1]: Detected first boot.
Oct 30 00:05:16.181878 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 00:05:16.181891 zram_generator::config[1172]: No configuration found.
Oct 30 00:05:16.181906 kernel: Guest personality initialized and is inactive
Oct 30 00:05:16.181918 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 00:05:16.181931 kernel: Initialized host personality
Oct 30 00:05:16.181949 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 00:05:16.181963 systemd[1]: Populated /etc with preset unit settings.
Oct 30 00:05:16.181977 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 00:05:16.181990 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 00:05:16.182003 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:05:16.182016 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 00:05:16.182030 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 00:05:16.182045 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 00:05:16.182058 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 00:05:16.182071 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 00:05:16.182084 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 00:05:16.182098 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 00:05:16.182111 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 00:05:16.182127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:05:16.182142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:05:16.182155 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 00:05:16.182169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 00:05:16.182182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 00:05:16.182195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:05:16.182209 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 00:05:16.182225 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:05:16.182238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:05:16.182251 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 00:05:16.182264 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 00:05:16.182277 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:05:16.182290 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 00:05:16.182303 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:05:16.182319 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:05:16.182331 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:05:16.182344 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:05:16.182357 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 00:05:16.182372 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 00:05:16.182385 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 00:05:16.182398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:05:16.182411 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:05:16.182429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:05:16.182442 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 00:05:16.182455 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 00:05:16.182468 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 00:05:16.182480 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 00:05:16.182494 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:05:16.182506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 00:05:16.182522 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 00:05:16.182535 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 00:05:16.182549 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 00:05:16.182561 systemd[1]: Reached target machines.target - Containers.
Oct 30 00:05:16.182595 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 00:05:16.182609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:05:16.182625 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:05:16.182637 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 00:05:16.182650 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:05:16.182673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:05:16.182686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:05:16.182699 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 00:05:16.182712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:05:16.182729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 00:05:16.182743 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 00:05:16.182756 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 00:05:16.182769 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 00:05:16.182783 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 00:05:16.182796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:05:16.182809 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:05:16.182825 kernel: fuse: init (API version 7.41)
Oct 30 00:05:16.182838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:05:16.182851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:05:16.182864 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 00:05:16.182880 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 00:05:16.182892 kernel: ACPI: bus type drm_connector registered
Oct 30 00:05:16.182926 systemd-journald[1236]: Collecting audit messages is disabled.
Oct 30 00:05:16.182955 systemd-journald[1236]: Journal started
Oct 30 00:05:16.182981 systemd-journald[1236]: Runtime Journal (/run/log/journal/fc998346d9f4437a8e59d2035e6d5bd9) is 6M, max 48.1M, 42.1M free.
Oct 30 00:05:15.730926 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 00:05:15.748920 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 30 00:05:15.749542 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 00:05:15.749968 systemd[1]: systemd-journald.service: Consumed 1.227s CPU time.
Oct 30 00:05:16.194916 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:05:16.200646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:05:16.204672 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:05:16.207786 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 00:05:16.209890 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 00:05:16.212429 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 00:05:16.215173 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 00:05:16.217910 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 00:05:16.220015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 00:05:16.223222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:05:16.226077 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 00:05:16.226505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 00:05:16.229241 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:05:16.229620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:05:16.232103 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:05:16.232337 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:05:16.235069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:05:16.235306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:05:16.238246 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 00:05:16.238555 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 00:05:16.240850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:05:16.241159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:05:16.243777 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 00:05:16.246354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:05:16.249735 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:05:16.254621 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 00:05:16.257436 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 00:05:16.278440 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:05:16.281637 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 30 00:05:16.285536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 00:05:16.288933 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 00:05:16.291218 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 00:05:16.291256 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:05:16.294535 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 00:05:16.297332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:05:16.302751 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 00:05:16.306411 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 00:05:16.308502 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:05:16.311250 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 30 00:05:16.315476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:05:16.317888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:05:16.321962 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 30 00:05:16.327909 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 00:05:16.329262 systemd-journald[1236]: Time spent on flushing to /var/log/journal/fc998346d9f4437a8e59d2035e6d5bd9 is 31.927ms for 1052 entries.
Oct 30 00:05:16.329262 systemd-journald[1236]: System Journal (/var/log/journal/fc998346d9f4437a8e59d2035e6d5bd9) is 8M, max 163.5M, 155.5M free.
Oct 30 00:05:16.374902 systemd-journald[1236]: Received client request to flush runtime journal.
Oct 30 00:05:16.375461 kernel: loop1: detected capacity change from 0 to 229808
Oct 30 00:05:16.341831 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:05:16.345087 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 30 00:05:16.347613 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 30 00:05:16.350381 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 30 00:05:16.369496 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 30 00:05:16.376623 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 30 00:05:16.379945 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 30 00:05:16.395204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:05:16.396268 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Oct 30 00:05:16.396511 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Oct 30 00:05:16.402763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:05:16.409628 kernel: loop2: detected capacity change from 0 to 110976
Oct 30 00:05:16.411418 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 30 00:05:16.443364 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 30 00:05:16.449597 kernel: loop3: detected capacity change from 0 to 128048
Oct 30 00:05:16.463587 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 30 00:05:16.467894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 00:05:16.470845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:05:16.476609 kernel: loop4: detected capacity change from 0 to 229808
Oct 30 00:05:16.489761 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 30 00:05:16.494598 kernel: loop5: detected capacity change from 0 to 110976
Oct 30 00:05:16.504799 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Oct 30 00:05:16.504820 systemd-tmpfiles[1313]: ACLs are not supported, ignoring.
Oct 30 00:05:16.507600 kernel: loop6: detected capacity change from 0 to 128048
Oct 30 00:05:16.510520 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:05:16.523493 (sd-merge)[1314]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 30 00:05:16.530680 (sd-merge)[1314]: Merged extensions into '/usr'.
Oct 30 00:05:16.536584 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 30 00:05:16.536603 systemd[1]: Reloading...
Oct 30 00:05:16.629605 zram_generator::config[1350]: No configuration found.
Oct 30 00:05:16.653605 systemd-resolved[1312]: Positive Trust Anchors:
Oct 30 00:05:16.653623 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 00:05:16.653629 systemd-resolved[1312]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 30 00:05:16.653669 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 00:05:16.658961 systemd-resolved[1312]: Defaulting to hostname 'linux'.
Oct 30 00:05:16.858868 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 30 00:05:16.859073 systemd[1]: Reloading finished in 321 ms.
Oct 30 00:05:16.895476 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 30 00:05:16.897858 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 00:05:16.900429 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 30 00:05:16.905796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:05:16.927915 systemd[1]: Starting ensure-sysext.service...
Oct 30 00:05:16.931606 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:05:16.950003 systemd[1]: Reload requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Oct 30 00:05:16.950023 systemd[1]: Reloading...
Oct 30 00:05:16.959943 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 30 00:05:16.960001 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 30 00:05:16.960468 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 30 00:05:16.961118 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 30 00:05:16.962345 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 30 00:05:16.962668 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Oct 30 00:05:16.962744 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Oct 30 00:05:16.969758 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 00:05:16.969774 systemd-tmpfiles[1387]: Skipping /boot
Oct 30 00:05:16.985370 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Oct 30 00:05:16.985390 systemd-tmpfiles[1387]: Skipping /boot
Oct 30 00:05:17.037615 zram_generator::config[1417]: No configuration found.
Oct 30 00:05:17.245282 systemd[1]: Reloading finished in 294 ms.
Oct 30 00:05:17.267018 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 30 00:05:17.302051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:05:17.316502 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 00:05:17.320682 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 30 00:05:17.341260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 30 00:05:17.350900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 30 00:05:17.355992 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:05:17.364854 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 30 00:05:17.375175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:05:17.375412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:05:17.382870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:05:17.387431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:05:17.395343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:05:17.397813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:05:17.397933 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:05:17.398032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:05:17.399460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:05:17.399733 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:05:17.404396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:05:17.404883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:05:17.419100 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 30 00:05:17.428366 systemd-udevd[1461]: Using default interface naming scheme 'v257'.
Oct 30 00:05:17.429187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:05:17.429490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:05:17.437348 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 30 00:05:17.460369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:05:17.461069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:05:17.462743 augenrules[1490]: No rules
Oct 30 00:05:17.462851 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:05:17.466745 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:05:17.479214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:05:17.484864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:05:17.487279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:05:17.487555 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:05:17.487784 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:05:17.490141 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 00:05:17.490483 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 00:05:17.492977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:05:17.496140 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 30 00:05:17.499079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:05:17.499462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:05:17.502149 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:05:17.502468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:05:17.504876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:05:17.505201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:05:17.507920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:05:17.508270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:05:17.525722 systemd[1]: Finished ensure-sysext.service.
Oct 30 00:05:17.533745 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 00:05:17.536655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 30 00:05:17.536732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 30 00:05:17.538770 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 30 00:05:17.540889 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 30 00:05:17.592130 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 00:05:17.596420 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 30 00:05:17.647117 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 30 00:05:17.682018 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 30 00:05:17.708534 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 30 00:05:17.713990 systemd[1]: Reached target time-set.target - System Time Set.
Oct 30 00:05:17.724204 systemd-networkd[1519]: lo: Link UP
Oct 30 00:05:17.724216 systemd-networkd[1519]: lo: Gained carrier
Oct 30 00:05:17.729281 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 00:05:17.731355 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 00:05:17.731368 systemd-networkd[1519]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 00:05:17.731475 systemd[1]: Reached target network.target - Network.
Oct 30 00:05:17.734130 systemd-networkd[1519]: eth0: Link UP
Oct 30 00:05:17.735027 systemd-networkd[1519]: eth0: Gained carrier
Oct 30 00:05:17.735167 systemd-networkd[1519]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 00:05:17.735605 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 30 00:05:17.735655 kernel: mousedev: PS/2 mouse device common for all mice
Oct 30 00:05:17.735557 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 30 00:05:17.744911 kernel: ACPI: button: Power Button [PWRF]
Oct 30 00:05:17.744737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 30 00:05:17.823741 systemd-networkd[1519]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 00:05:17.825267 systemd-timesyncd[1521]: Network configuration changed, trying to establish connection.
Oct 30 00:05:19.152395 systemd-resolved[1312]: Clock change detected. Flushing caches.
Oct 30 00:05:19.152477 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 30 00:05:19.152825 systemd-timesyncd[1521]: Initial clock synchronization to Thu 2025-10-30 00:05:19.152207 UTC.
Oct 30 00:05:19.184389 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 30 00:05:19.263280 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 30 00:05:19.263718 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 30 00:05:19.266380 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 30 00:05:19.327910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:05:19.344390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:05:19.344665 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:19.349064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:05:19.545444 ldconfig[1458]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 30 00:05:19.571362 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 30 00:05:19.580639 kernel: kvm_amd: TSC scaling supported
Oct 30 00:05:19.580711 kernel: kvm_amd: Nested Virtualization enabled
Oct 30 00:05:19.580759 kernel: kvm_amd: Nested Paging enabled
Oct 30 00:05:19.580782 kernel: kvm_amd: LBR virtualization supported
Oct 30 00:05:19.580802 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 30 00:05:19.580835 kernel: kvm_amd: Virtual GIF supported
Oct 30 00:05:19.576414 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 30 00:05:19.616143 kernel: EDAC MC: Ver: 3.0.0
Oct 30 00:05:19.619424 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 30 00:05:19.656798 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:05:19.661373 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 00:05:19.663502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 30 00:05:19.665923 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 30 00:05:19.668266 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 30 00:05:19.670704 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 30 00:05:19.672856 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 30 00:05:19.675234 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 30 00:05:19.677493 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 30 00:05:19.677522 systemd[1]: Reached target paths.target - Path Units.
Oct 30 00:05:19.679178 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 00:05:19.681831 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 30 00:05:19.685668 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 30 00:05:19.689822 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 30 00:05:19.692431 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 30 00:05:19.694677 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 30 00:05:19.704586 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 30 00:05:19.706649 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 30 00:05:19.709530 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 30 00:05:19.712317 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 00:05:19.713978 systemd[1]: Reached target basic.target - Basic System.
Oct 30 00:05:19.715793 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 30 00:05:19.715881 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 30 00:05:19.717925 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 30 00:05:19.721740 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 30 00:05:19.724951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 30 00:05:19.728922 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 30 00:05:19.733339 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 30 00:05:19.738001 jq[1580]: false
Oct 30 00:05:19.738433 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 30 00:05:19.741200 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 30 00:05:19.745361 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 30 00:05:19.749765 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 30 00:05:19.755238 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 30 00:05:19.761298 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 30 00:05:19.765148 extend-filesystems[1581]: Found /dev/vda6
Oct 30 00:05:19.764272 oslogin_cache_refresh[1582]: Refreshing passwd entry cache
Oct 30 00:05:19.768037 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing passwd entry cache
Oct 30 00:05:19.768484 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 30 00:05:19.768879 extend-filesystems[1581]: Found /dev/vda9
Oct 30 00:05:19.771479 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 30 00:05:19.774155 extend-filesystems[1581]: Checking size of /dev/vda9
Oct 30 00:05:19.776254 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 30 00:05:19.777812 systemd[1]: Starting update-engine.service - Update Engine...
Oct 30 00:05:19.779419 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting users, quitting
Oct 30 00:05:19.779410 oslogin_cache_refresh[1582]: Failure getting users, quitting
Oct 30 00:05:19.779689 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 00:05:19.779689 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Refreshing group entry cache
Oct 30 00:05:19.779439 oslogin_cache_refresh[1582]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 30 00:05:19.779523 oslogin_cache_refresh[1582]: Refreshing group entry cache
Oct 30 00:05:19.786579 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Failure getting groups, quitting
Oct 30 00:05:19.786579 google_oslogin_nss_cache[1582]: oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 00:05:19.786550 oslogin_cache_refresh[1582]: Failure getting groups, quitting
Oct 30 00:05:19.786566 oslogin_cache_refresh[1582]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 30 00:05:19.789473 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 30 00:05:19.795380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 30 00:05:19.798024 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 30 00:05:19.799321 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 30 00:05:19.799777 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 30 00:05:19.803557 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 30 00:05:19.820232 jq[1604]: true
Oct 30 00:05:19.806196 systemd[1]: motdgen.service: Deactivated successfully.
Oct 30 00:05:19.820649 update_engine[1599]: I20251030 00:05:19.806264 1599 main.cc:92] Flatcar Update Engine starting
Oct 30 00:05:19.806499 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 30 00:05:19.810809 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 30 00:05:19.811254 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 30 00:05:19.827669 jq[1611]: true
Oct 30 00:05:19.845402 (ntainerd)[1621]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 30 00:05:19.846739 extend-filesystems[1581]: Resized partition /dev/vda9
Oct 30 00:05:19.863776 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 30 00:05:19.863804 extend-filesystems[1629]: resize2fs 1.47.3 (8-Jul-2025)
Oct 30 00:05:19.867677 tar[1610]: linux-amd64/LICENSE
Oct 30 00:05:19.868149 tar[1610]: linux-amd64/helm
Oct 30 00:05:19.900112 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 30 00:05:19.941520 dbus-daemon[1578]: [system] SELinux support is enabled
Oct 30 00:05:19.942871 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 30 00:05:19.945419 systemd-logind[1593]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 30 00:05:19.945440 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 30 00:05:19.948287 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 30 00:05:19.950175 extend-filesystems[1629]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 30 00:05:19.950175 extend-filesystems[1629]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 30 00:05:19.950175 extend-filesystems[1629]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 30 00:05:19.948315 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 30 00:05:19.976495 update_engine[1599]: I20251030 00:05:19.953933 1599 update_check_scheduler.cc:74] Next update check in 3m36s
Oct 30 00:05:19.976534 extend-filesystems[1581]: Resized filesystem in /dev/vda9
Oct 30 00:05:19.951419 systemd-logind[1593]: New seat seat0.
Oct 30 00:05:19.954933 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 30 00:05:19.954962 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 30 00:05:19.956169 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 30 00:05:19.956893 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 30 00:05:19.957712 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 30 00:05:19.979604 systemd[1]: Started update-engine.service - Update Engine.
Oct 30 00:05:19.984944 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 30 00:05:19.992546 bash[1645]: Updated "/home/core/.ssh/authorized_keys"
Oct 30 00:05:19.991848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 30 00:05:19.996961 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 30 00:05:20.006208 sshd_keygen[1603]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 30 00:05:20.054720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 30 00:05:20.064228 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 30 00:05:20.085121 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 30 00:05:20.124792 systemd[1]: issuegen.service: Deactivated successfully.
Oct 30 00:05:20.125184 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 30 00:05:20.130581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 30 00:05:20.159629 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 30 00:05:20.165654 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 30 00:05:20.174323 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 30 00:05:20.176903 systemd[1]: Reached target getty.target - Login Prompts.
Oct 30 00:05:20.299441 containerd[1621]: time="2025-10-30T00:05:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Oct 30 00:05:20.301230 containerd[1621]: time="2025-10-30T00:05:20.300351758Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Oct 30 00:05:20.326842 containerd[1621]: time="2025-10-30T00:05:20.326669955Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.33µs"
Oct 30 00:05:20.326842 containerd[1621]: time="2025-10-30T00:05:20.326725138Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Oct 30 00:05:20.326842 containerd[1621]: time="2025-10-30T00:05:20.326757208Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Oct 30 00:05:20.327039 containerd[1621]: time="2025-10-30T00:05:20.327026964Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Oct 30 00:05:20.327070 containerd[1621]: time="2025-10-30T00:05:20.327050779Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Oct 30 00:05:20.327396 containerd[1621]: time="2025-10-30T00:05:20.327357374Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 00:05:20.327572 containerd[1621]: time="2025-10-30T00:05:20.327474203Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Oct 30 00:05:20.328600 containerd[1621]: time="2025-10-30T00:05:20.328320380Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 00:05:20.328718 containerd[1621]: time="2025-10-30T00:05:20.328656280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Oct 30 00:05:20.328718 containerd[1621]: time="2025-10-30T00:05:20.328679183Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 00:05:20.328718 containerd[1621]: time="2025-10-30T00:05:20.328693710Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Oct 30 00:05:20.328718 containerd[1621]: time="2025-10-30T00:05:20.328704209Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Oct 30 00:05:20.328936 containerd[1621]: time="2025-10-30T00:05:20.328817923Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Oct 30 00:05:20.329178 containerd[1621]: time="2025-10-30T00:05:20.329115481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 00:05:20.329178 containerd[1621]: time="2025-10-30T00:05:20.329159483Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Oct 30 00:05:20.329178 containerd[1621]: time="2025-10-30T00:05:20.329172638Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Oct 30 00:05:20.329384 containerd[1621]: time="2025-10-30T00:05:20.329228533Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Oct 30 00:05:20.329724 containerd[1621]: time="2025-10-30T00:05:20.329685760Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Oct 30 00:05:20.329800 containerd[1621]: time="2025-10-30T00:05:20.329780919Z" level=info msg="metadata content store policy set" policy=shared
Oct 30 00:05:20.339304 containerd[1621]: time="2025-10-30T00:05:20.339248574Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Oct 30 00:05:20.339304 containerd[1621]: time="2025-10-30T00:05:20.339312153Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Oct 30 00:05:20.339481 containerd[1621]: time="2025-10-30T00:05:20.339338573Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Oct 30 00:05:20.339481 containerd[1621]: time="2025-10-30T00:05:20.339365523Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Oct 30 00:05:20.339481 containerd[1621]: time="2025-10-30T00:05:20.339381373Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Oct 30 00:05:20.339481 containerd[1621]: time="2025-10-30T00:05:20.339395740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Oct 30 00:05:20.339582 containerd[1621]: time="2025-10-30T00:05:20.339498853Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Oct 30 00:05:20.339582 containerd[1621]: time="2025-10-30T00:05:20.339558104Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Oct 30 00:05:20.339582 containerd[1621]: time="2025-10-30T00:05:20.339576699Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Oct 30 00:05:20.339667 containerd[1621]: time="2025-10-30T00:05:20.339593260Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Oct 30 00:05:20.339667 containerd[1621]: time="2025-10-30T00:05:20.339607778Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Oct 30 00:05:20.339667 containerd[1621]: time="2025-10-30T00:05:20.339626593Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Oct 30 00:05:20.339847 containerd[1621]: time="2025-10-30T00:05:20.339814786Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Oct 30 00:05:20.339890 containerd[1621]: time="2025-10-30T00:05:20.339845644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Oct 30 00:05:20.339890 containerd[1621]: time="2025-10-30T00:05:20.339867445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Oct 30 00:05:20.339890 containerd[1621]: time="2025-10-30T00:05:20.339882643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Oct 30 00:05:20.339978 containerd[1621]: time="2025-10-30T00:05:20.339917528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Oct 30 00:05:20.339978 containerd[1621]: time="2025-10-30T00:05:20.339950641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Oct 30 00:05:20.339978 containerd[1621]: time="2025-10-30T00:05:20.339969967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Oct 30 00:05:20.340071 containerd[1621]: time="2025-10-30T00:05:20.339983843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Oct 30 00:05:20.340071 containerd[1621]: time="2025-10-30T00:05:20.339999081Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 00:05:20.340071 containerd[1621]: time="2025-10-30T00:05:20.340013428Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 00:05:20.340182 containerd[1621]: time="2025-10-30T00:05:20.340144905Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 00:05:20.340283 containerd[1621]: time="2025-10-30T00:05:20.340250202Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 00:05:20.340283 containerd[1621]: time="2025-10-30T00:05:20.340274839Z" level=info msg="Start snapshots syncer" Oct 30 00:05:20.340366 containerd[1621]: time="2025-10-30T00:05:20.340318240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 00:05:20.340873 containerd[1621]: time="2025-10-30T00:05:20.340739891Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 00:05:20.340873 containerd[1621]: time="2025-10-30T00:05:20.340843425Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 00:05:20.341232 containerd[1621]: time="2025-10-30T00:05:20.340954633Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 00:05:20.341232 containerd[1621]: time="2025-10-30T00:05:20.341158726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 00:05:20.341232 containerd[1621]: time="2025-10-30T00:05:20.341187069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 00:05:20.341232 containerd[1621]: time="2025-10-30T00:05:20.341200384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 00:05:20.341232 containerd[1621]: time="2025-10-30T00:05:20.341217096Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 00:05:20.341232 containerd[1621]: time="2025-10-30T00:05:20.341233667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 00:05:20.341411 containerd[1621]: time="2025-10-30T00:05:20.341264034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 00:05:20.341411 containerd[1621]: time="2025-10-30T00:05:20.341279903Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 00:05:20.341411 containerd[1621]: time="2025-10-30T00:05:20.341310371Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 00:05:20.341411 containerd[1621]: time="2025-10-30T00:05:20.341325709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 00:05:20.341411 containerd[1621]: time="2025-10-30T00:05:20.341340006Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 00:05:20.341548 containerd[1621]: time="2025-10-30T00:05:20.341414065Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:05:20.341548 containerd[1621]: time="2025-10-30T00:05:20.341519493Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:05:20.341548 containerd[1621]: time="2025-10-30T00:05:20.341537326Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341551142Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341563435Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341576309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341589835Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341612697Z" level=info msg="runtime interface created" Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341620081Z" level=info msg="created NRI interface" Oct 30 00:05:20.341639 containerd[1621]: time="2025-10-30T00:05:20.341631653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 00:05:20.341821 containerd[1621]: time="2025-10-30T00:05:20.341647773Z" level=info msg="Connect containerd service" Oct 30 00:05:20.341821 containerd[1621]: time="2025-10-30T00:05:20.341677249Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 00:05:20.342763 
containerd[1621]: time="2025-10-30T00:05:20.342720936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:05:20.465118 tar[1610]: linux-amd64/README.md Oct 30 00:05:20.512938 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 30 00:05:20.557457 containerd[1621]: time="2025-10-30T00:05:20.557378355Z" level=info msg="Start subscribing containerd event" Oct 30 00:05:20.557613 containerd[1621]: time="2025-10-30T00:05:20.557479445Z" level=info msg="Start recovering state" Oct 30 00:05:20.557697 containerd[1621]: time="2025-10-30T00:05:20.557662318Z" level=info msg="Start event monitor" Oct 30 00:05:20.557740 containerd[1621]: time="2025-10-30T00:05:20.557696422Z" level=info msg="Start cni network conf syncer for default" Oct 30 00:05:20.557740 containerd[1621]: time="2025-10-30T00:05:20.557711390Z" level=info msg="Start streaming server" Oct 30 00:05:20.557740 containerd[1621]: time="2025-10-30T00:05:20.557737208Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 00:05:20.557861 containerd[1621]: time="2025-10-30T00:05:20.557747287Z" level=info msg="runtime interface starting up..." Oct 30 00:05:20.557861 containerd[1621]: time="2025-10-30T00:05:20.557755282Z" level=info msg="starting plugins..." Oct 30 00:05:20.557861 containerd[1621]: time="2025-10-30T00:05:20.557777724Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 00:05:20.558902 containerd[1621]: time="2025-10-30T00:05:20.558487756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 00:05:20.558902 containerd[1621]: time="2025-10-30T00:05:20.558594756Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 30 00:05:20.558902 containerd[1621]: time="2025-10-30T00:05:20.558762411Z" level=info msg="containerd successfully booted in 0.260142s" Oct 30 00:05:20.559065 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 00:05:20.995380 systemd-networkd[1519]: eth0: Gained IPv6LL Oct 30 00:05:20.999661 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 00:05:21.022469 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 00:05:21.026586 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 30 00:05:21.031540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:05:21.051477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 00:05:21.084444 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 00:05:21.087568 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 30 00:05:21.087926 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 30 00:05:21.091529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 00:05:22.351601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:05:22.595535 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:05:22.596634 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 00:05:22.600535 systemd[1]: Startup finished in 2.803s (kernel) + 9.070s (initrd) + 6.322s (userspace) = 18.196s. 
Oct 30 00:05:23.474877 kubelet[1717]: E1030 00:05:23.474755 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 00:05:23.479384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 00:05:23.479585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 00:05:23.481122 systemd[1]: kubelet.service: Consumed 1.800s CPU time, 267.9M memory peak.
Oct 30 00:05:29.236287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 30 00:05:29.237863 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:49596.service - OpenSSH per-connection server daemon (10.0.0.1:49596).
Oct 30 00:05:29.339717 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 49596 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:29.342377 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:29.350324 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 30 00:05:29.351687 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 30 00:05:29.358973 systemd-logind[1593]: New session 1 of user core.
Oct 30 00:05:29.383007 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 30 00:05:29.387028 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 30 00:05:29.412881 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 30 00:05:29.416027 systemd-logind[1593]: New session c1 of user core.
Oct 30 00:05:29.599794 systemd[1735]: Queued start job for default target default.target.
Oct 30 00:05:29.621686 systemd[1735]: Created slice app.slice - User Application Slice.
Oct 30 00:05:29.621714 systemd[1735]: Reached target paths.target - Paths.
Oct 30 00:05:29.621758 systemd[1735]: Reached target timers.target - Timers.
Oct 30 00:05:29.623751 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 30 00:05:29.636446 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 30 00:05:29.636746 systemd[1735]: Reached target sockets.target - Sockets.
Oct 30 00:05:29.636857 systemd[1735]: Reached target basic.target - Basic System.
Oct 30 00:05:29.636930 systemd[1735]: Reached target default.target - Main User Target.
Oct 30 00:05:29.636990 systemd[1735]: Startup finished in 213ms.
Oct 30 00:05:29.637201 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 30 00:05:29.654215 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 30 00:05:29.726661 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:49608.service - OpenSSH per-connection server daemon (10.0.0.1:49608).
Oct 30 00:05:29.778173 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 49608 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:29.780199 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:29.785418 systemd-logind[1593]: New session 2 of user core.
Oct 30 00:05:29.799295 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 30 00:05:29.855631 sshd[1749]: Connection closed by 10.0.0.1 port 49608
Oct 30 00:05:29.855997 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:29.873135 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:49608.service: Deactivated successfully.
Oct 30 00:05:29.875487 systemd[1]: session-2.scope: Deactivated successfully.
Oct 30 00:05:29.876319 systemd-logind[1593]: Session 2 logged out. Waiting for processes to exit.
Oct 30 00:05:29.879598 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616).
Oct 30 00:05:29.880548 systemd-logind[1593]: Removed session 2.
Oct 30 00:05:29.931736 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:29.933257 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:29.939184 systemd-logind[1593]: New session 3 of user core.
Oct 30 00:05:29.957276 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 30 00:05:30.008690 sshd[1759]: Connection closed by 10.0.0.1 port 49616
Oct 30 00:05:30.009151 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:30.025135 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:49616.service: Deactivated successfully.
Oct 30 00:05:30.027393 systemd[1]: session-3.scope: Deactivated successfully.
Oct 30 00:05:30.028336 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit.
Oct 30 00:05:30.031426 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:36200.service - OpenSSH per-connection server daemon (10.0.0.1:36200).
Oct 30 00:05:30.032238 systemd-logind[1593]: Removed session 3.
Oct 30 00:05:30.094734 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 36200 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:30.096760 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:30.102990 systemd-logind[1593]: New session 4 of user core.
Oct 30 00:05:30.112248 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 30 00:05:30.170196 sshd[1768]: Connection closed by 10.0.0.1 port 36200
Oct 30 00:05:30.170647 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:30.181200 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:36200.service: Deactivated successfully.
Oct 30 00:05:30.183990 systemd[1]: session-4.scope: Deactivated successfully.
Oct 30 00:05:30.185099 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit.
Oct 30 00:05:30.188882 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:36206.service - OpenSSH per-connection server daemon (10.0.0.1:36206).
Oct 30 00:05:30.189934 systemd-logind[1593]: Removed session 4.
Oct 30 00:05:30.247345 sshd[1774]: Accepted publickey for core from 10.0.0.1 port 36206 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:30.249166 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:30.254764 systemd-logind[1593]: New session 5 of user core.
Oct 30 00:05:30.264202 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 30 00:05:30.330783 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 30 00:05:30.331227 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:05:30.356040 sudo[1778]: pam_unix(sudo:session): session closed for user root
Oct 30 00:05:30.358444 sshd[1777]: Connection closed by 10.0.0.1 port 36206
Oct 30 00:05:30.358838 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:30.374338 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:36206.service: Deactivated successfully.
Oct 30 00:05:30.376566 systemd[1]: session-5.scope: Deactivated successfully.
Oct 30 00:05:30.377548 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit.
Oct 30 00:05:30.381483 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:36212.service - OpenSSH per-connection server daemon (10.0.0.1:36212).
Oct 30 00:05:30.382398 systemd-logind[1593]: Removed session 5.
Oct 30 00:05:30.456680 sshd[1784]: Accepted publickey for core from 10.0.0.1 port 36212 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:30.458582 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:30.464560 systemd-logind[1593]: New session 6 of user core.
Oct 30 00:05:30.474417 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 30 00:05:30.534177 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 30 00:05:30.534516 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:05:30.549423 sudo[1789]: pam_unix(sudo:session): session closed for user root
Oct 30 00:05:30.559946 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 30 00:05:30.560433 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:05:30.575854 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 30 00:05:30.633486 augenrules[1811]: No rules
Oct 30 00:05:30.634620 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 30 00:05:30.634924 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 30 00:05:30.636319 sudo[1788]: pam_unix(sudo:session): session closed for user root
Oct 30 00:05:30.638792 sshd[1787]: Connection closed by 10.0.0.1 port 36212
Oct 30 00:05:30.639213 sshd-session[1784]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:30.660256 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:36212.service: Deactivated successfully.
Oct 30 00:05:30.662752 systemd[1]: session-6.scope: Deactivated successfully.
Oct 30 00:05:30.663682 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit.
Oct 30 00:05:30.667379 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:36224.service - OpenSSH per-connection server daemon (10.0.0.1:36224).
Oct 30 00:05:30.668270 systemd-logind[1593]: Removed session 6.
Oct 30 00:05:30.731747 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 36224 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:30.733388 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:30.739543 systemd-logind[1593]: New session 7 of user core.
Oct 30 00:05:30.753285 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 30 00:05:30.811811 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 30 00:05:30.812299 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 30 00:05:31.833134 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 30 00:05:31.851928 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 30 00:05:32.632250 dockerd[1844]: time="2025-10-30T00:05:32.632148670Z" level=info msg="Starting up"
Oct 30 00:05:32.633104 dockerd[1844]: time="2025-10-30T00:05:32.633024372Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Oct 30 00:05:32.679621 dockerd[1844]: time="2025-10-30T00:05:32.679558380Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Oct 30 00:05:33.730137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:05:33.731846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 00:05:34.650305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 00:05:34.664398 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 30 00:05:34.739813 kubelet[1877]: E1030 00:05:34.739733 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 30 00:05:34.747974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 30 00:05:34.748222 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 30 00:05:34.748629 systemd[1]: kubelet.service: Consumed 278ms CPU time, 113.1M memory peak.
Oct 30 00:05:35.447811 dockerd[1844]: time="2025-10-30T00:05:35.447724429Z" level=info msg="Loading containers: start."
Oct 30 00:05:35.540132 kernel: Initializing XFRM netlink socket
Oct 30 00:05:35.944891 systemd-networkd[1519]: docker0: Link UP
Oct 30 00:05:35.954438 dockerd[1844]: time="2025-10-30T00:05:35.954322622Z" level=info msg="Loading containers: done."
Oct 30 00:05:35.976027 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1068653216-merged.mount: Deactivated successfully.
Oct 30 00:05:35.981579 dockerd[1844]: time="2025-10-30T00:05:35.981431921Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 30 00:05:35.981579 dockerd[1844]: time="2025-10-30T00:05:35.981588335Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Oct 30 00:05:35.981861 dockerd[1844]: time="2025-10-30T00:05:35.981766970Z" level=info msg="Initializing buildkit"
Oct 30 00:05:36.638858 dockerd[1844]: time="2025-10-30T00:05:36.638793509Z" level=info msg="Completed buildkit initialization"
Oct 30 00:05:36.646295 dockerd[1844]: time="2025-10-30T00:05:36.646224945Z" level=info msg="Daemon has completed initialization"
Oct 30 00:05:36.646463 dockerd[1844]: time="2025-10-30T00:05:36.646338408Z" level=info msg="API listen on /run/docker.sock"
Oct 30 00:05:36.646651 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 30 00:05:38.020033 containerd[1621]: time="2025-10-30T00:05:38.019930474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 30 00:05:39.719137 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1184484293 wd_nsec: 1184483719
Oct 30 00:05:39.770585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119006988.mount: Deactivated successfully.
Oct 30 00:05:41.472146 containerd[1621]: time="2025-10-30T00:05:41.472058356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:05:41.473836 containerd[1621]: time="2025-10-30T00:05:41.473807556Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Oct 30 00:05:41.476182 containerd[1621]: time="2025-10-30T00:05:41.476069528Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:05:41.481307 containerd[1621]: time="2025-10-30T00:05:41.481246447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:05:41.482634 containerd[1621]: time="2025-10-30T00:05:41.482597410Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.462573431s"
Oct 30 00:05:41.482679 containerd[1621]: time="2025-10-30T00:05:41.482648566Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 30 00:05:41.483723 containerd[1621]: time="2025-10-30T00:05:41.483675532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 30 00:05:43.386948 containerd[1621]: time="2025-10-30T00:05:43.386841467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:05:43.391759 containerd[1621]: time="2025-10-30T00:05:43.391675753Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Oct 30 00:05:43.396106 containerd[1621]: time="2025-10-30T00:05:43.396000764Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:05:43.447798 containerd[1621]: time="2025-10-30T00:05:43.447689730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 30 00:05:43.448908 containerd[1621]: time="2025-10-30T00:05:43.448526268Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.964795883s"
Oct 30 00:05:43.448908 containerd[1621]: time="2025-10-30T00:05:43.448588415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 30 00:05:43.449762 containerd[1621]: time="2025-10-30T00:05:43.449520834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 30 00:05:44.766733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 30 00:05:44.769019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 30 00:05:45.109692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 30 00:05:45.131725 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:05:45.337970 kubelet[2150]: E1030 00:05:45.337891 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:05:45.343848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:05:45.344110 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:05:45.344640 systemd[1]: kubelet.service: Consumed 391ms CPU time, 109.4M memory peak. Oct 30 00:05:46.609012 containerd[1621]: time="2025-10-30T00:05:46.608942146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:46.610224 containerd[1621]: time="2025-10-30T00:05:46.610176441Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Oct 30 00:05:46.611906 containerd[1621]: time="2025-10-30T00:05:46.611863705Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:46.615477 containerd[1621]: time="2025-10-30T00:05:46.615369049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:46.618482 containerd[1621]: time="2025-10-30T00:05:46.618423257Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 3.168838954s" Oct 30 00:05:46.618482 containerd[1621]: time="2025-10-30T00:05:46.618472108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 30 00:05:46.619022 containerd[1621]: time="2025-10-30T00:05:46.618986603Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 30 00:05:48.895292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399566629.mount: Deactivated successfully. Oct 30 00:05:49.760491 containerd[1621]: time="2025-10-30T00:05:49.760404233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:49.761568 containerd[1621]: time="2025-10-30T00:05:49.761534232Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Oct 30 00:05:49.763038 containerd[1621]: time="2025-10-30T00:05:49.762990032Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:49.765440 containerd[1621]: time="2025-10-30T00:05:49.765399801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:49.766201 containerd[1621]: time="2025-10-30T00:05:49.766026777Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 3.147005258s" Oct 30 00:05:49.766201 containerd[1621]: time="2025-10-30T00:05:49.766064848Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 30 00:05:49.766708 containerd[1621]: time="2025-10-30T00:05:49.766552222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 30 00:05:50.819342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649828590.mount: Deactivated successfully. Oct 30 00:05:53.178104 containerd[1621]: time="2025-10-30T00:05:53.177987224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:53.181454 containerd[1621]: time="2025-10-30T00:05:53.181404724Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Oct 30 00:05:53.190706 containerd[1621]: time="2025-10-30T00:05:53.190594334Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:53.195203 containerd[1621]: time="2025-10-30T00:05:53.195134929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:05:53.196392 containerd[1621]: time="2025-10-30T00:05:53.196326646Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.429739509s" Oct 30 00:05:53.196392 containerd[1621]: time="2025-10-30T00:05:53.196371753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 30 00:05:53.197287 containerd[1621]: time="2025-10-30T00:05:53.197237655Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 30 00:05:55.516737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 00:05:55.519092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:05:55.816585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:05:55.831496 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:05:56.292387 kubelet[2231]: E1030 00:05:56.292299 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:05:56.297438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:05:56.297743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:05:56.298218 systemd[1]: kubelet.service: Consumed 290ms CPU time, 111.1M memory peak. Oct 30 00:05:56.632826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142422674.mount: Deactivated successfully. 
Oct 30 00:05:56.646851 containerd[1621]: time="2025-10-30T00:05:56.645815608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:05:56.647558 containerd[1621]: time="2025-10-30T00:05:56.647509215Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 00:05:56.648948 containerd[1621]: time="2025-10-30T00:05:56.648851131Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:05:56.654876 containerd[1621]: time="2025-10-30T00:05:56.653807656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:05:56.654876 containerd[1621]: time="2025-10-30T00:05:56.654538384Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 3.45725477s" Oct 30 00:05:56.654876 containerd[1621]: time="2025-10-30T00:05:56.654575464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 30 00:05:56.655485 containerd[1621]: time="2025-10-30T00:05:56.655178838Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 30 00:05:58.370153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1513190125.mount: Deactivated 
successfully. Oct 30 00:06:01.035783 containerd[1621]: time="2025-10-30T00:06:01.035364199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:01.050300 containerd[1621]: time="2025-10-30T00:06:01.050186086Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Oct 30 00:06:01.098407 containerd[1621]: time="2025-10-30T00:06:01.098302328Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:01.163468 containerd[1621]: time="2025-10-30T00:06:01.163395046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:01.164631 containerd[1621]: time="2025-10-30T00:06:01.164595700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.509380703s" Oct 30 00:06:01.164631 containerd[1621]: time="2025-10-30T00:06:01.164624645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 30 00:06:03.433532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:03.433706 systemd[1]: kubelet.service: Consumed 290ms CPU time, 111.1M memory peak. Oct 30 00:06:03.435930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
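The kubelet crash loop in this log repeats on a fixed cadence: restart counter 2 is scheduled at 00:05:44.766733 and counter 3 at 00:05:55.516737, roughly 10.75s apart. That spacing is consistent with systemd's default RestartSec=10 plus a short startup window, though the unit file itself is not shown in the log, so the RestartSec value is an assumption. A small Python sketch to measure the spacing from the journal timestamps:

```python
from datetime import datetime

# Journal-style prefixes copied from the two restart entries above.
ENTRIES = [
    "Oct 30 00:05:44.766733",  # kubelet.service restart counter is at 2
    "Oct 30 00:05:55.516737",  # kubelet.service restart counter is at 3
]

def parse(ts: str) -> datetime:
    # The journal prefix omits the year; 2025 is taken from the ISO
    # timestamps containerd logs alongside these entries.
    return datetime.strptime(f"2025 {ts}", "%Y %b %d %H:%M:%S.%f")

gap = (parse(ENTRIES[1]) - parse(ENTRIES[0])).total_seconds()
print(f"restart spacing: {gap:.2f}s")
```

The underlying failure (`/var/lib/kubelet/config.yaml: no such file or directory`) persists across restarts, so the loop continues until something creates that file.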
Oct 30 00:06:03.465137 systemd[1]: Reload requested from client PID 2329 ('systemctl') (unit session-7.scope)... Oct 30 00:06:03.465168 systemd[1]: Reloading... Oct 30 00:06:03.566116 zram_generator::config[2375]: No configuration found. Oct 30 00:06:03.918504 systemd[1]: Reloading finished in 452 ms. Oct 30 00:06:03.985822 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:06:03.985927 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:06:03.986323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:03.986381 systemd[1]: kubelet.service: Consumed 187ms CPU time, 98.4M memory peak. Oct 30 00:06:03.988289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:04.192587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:04.197721 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:06:04.234659 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:06:04.234659 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:06:04.234659 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:06:04.234659 kubelet[2420]: I1030 00:06:04.234654 2420 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:06:04.791242 kubelet[2420]: I1030 00:06:04.791179 2420 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:06:04.791242 kubelet[2420]: I1030 00:06:04.791213 2420 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:06:04.792330 kubelet[2420]: I1030 00:06:04.791775 2420 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:06:04.826368 kubelet[2420]: E1030 00:06:04.826315 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:06:04.826546 kubelet[2420]: I1030 00:06:04.826436 2420 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:06:04.833280 kubelet[2420]: I1030 00:06:04.833248 2420 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:06:04.839254 kubelet[2420]: I1030 00:06:04.839212 2420 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:06:04.839554 kubelet[2420]: I1030 00:06:04.839507 2420 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:06:04.839753 kubelet[2420]: I1030 00:06:04.839542 2420 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:06:04.839943 kubelet[2420]: I1030 00:06:04.839756 2420 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:06:04.839943 
kubelet[2420]: I1030 00:06:04.839770 2420 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:06:04.839943 kubelet[2420]: I1030 00:06:04.839942 2420 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:06:04.841823 kubelet[2420]: I1030 00:06:04.841794 2420 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:06:04.841823 kubelet[2420]: I1030 00:06:04.841813 2420 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:06:04.841914 kubelet[2420]: I1030 00:06:04.841839 2420 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:06:04.844467 kubelet[2420]: I1030 00:06:04.844372 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:06:04.849463 kubelet[2420]: E1030 00:06:04.849403 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:06:04.849590 kubelet[2420]: I1030 00:06:04.849529 2420 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:06:04.850509 kubelet[2420]: I1030 00:06:04.850156 2420 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:06:04.850509 kubelet[2420]: E1030 00:06:04.850328 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:06:04.850905 kubelet[2420]: W1030 
00:06:04.850887 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:06:04.853642 kubelet[2420]: I1030 00:06:04.853624 2420 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:06:04.853707 kubelet[2420]: I1030 00:06:04.853676 2420 server.go:1289] "Started kubelet" Oct 30 00:06:04.859937 kubelet[2420]: I1030 00:06:04.859167 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:06:04.859937 kubelet[2420]: I1030 00:06:04.859757 2420 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:06:04.860188 kubelet[2420]: I1030 00:06:04.860164 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:06:04.862211 kubelet[2420]: E1030 00:06:04.860892 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731c171e81c7bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 00:06:04.853643196 +0000 UTC m=+0.651385159,LastTimestamp:2025-10-30 00:06:04.853643196 +0000 UTC m=+0.651385159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 00:06:04.864490 kubelet[2420]: E1030 00:06:04.863884 2420 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:06:04.864490 kubelet[2420]: I1030 00:06:04.863943 2420 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:06:04.864490 kubelet[2420]: I1030 00:06:04.864367 2420 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:06:04.864789 kubelet[2420]: E1030 00:06:04.864763 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:04.864829 kubelet[2420]: I1030 00:06:04.864797 2420 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:06:04.864866 kubelet[2420]: I1030 00:06:04.864862 2420 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:06:04.864930 kubelet[2420]: I1030 00:06:04.864918 2420 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:06:04.865005 kubelet[2420]: I1030 00:06:04.864988 2420 server.go:317] "Adding debug handlers to kubelet server" Oct 30 00:06:04.865446 kubelet[2420]: E1030 00:06:04.865283 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:06:04.865501 kubelet[2420]: I1030 00:06:04.865469 2420 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:06:04.865552 kubelet[2420]: I1030 00:06:04.865531 2420 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:06:04.866996 kubelet[2420]: I1030 00:06:04.866976 2420 factory.go:223] Registration of 
the containerd container factory successfully Oct 30 00:06:04.868925 kubelet[2420]: E1030 00:06:04.868869 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Oct 30 00:06:04.870252 kubelet[2420]: I1030 00:06:04.870200 2420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 30 00:06:04.883965 kubelet[2420]: I1030 00:06:04.883937 2420 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:06:04.883965 kubelet[2420]: I1030 00:06:04.883952 2420 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:06:04.883965 kubelet[2420]: I1030 00:06:04.883970 2420 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:06:04.965409 kubelet[2420]: E1030 00:06:04.965308 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.066038 kubelet[2420]: E1030 00:06:05.065877 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.069982 kubelet[2420]: E1030 00:06:05.069935 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Oct 30 00:06:05.166720 kubelet[2420]: E1030 00:06:05.166622 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.266994 kubelet[2420]: E1030 00:06:05.266839 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.367434 kubelet[2420]: E1030 00:06:05.367259 2420 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.411313 update_engine[1599]: I20251030 00:06:05.411198 1599 update_attempter.cc:509] Updating boot flags... Oct 30 00:06:05.468095 kubelet[2420]: E1030 00:06:05.468014 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.470802 kubelet[2420]: E1030 00:06:05.470734 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Oct 30 00:06:05.501946 kubelet[2420]: I1030 00:06:05.501833 2420 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:06:05.501946 kubelet[2420]: I1030 00:06:05.501892 2420 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:06:05.502107 kubelet[2420]: I1030 00:06:05.502006 2420 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
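The "Failed to ensure lease exists, will retry" entries show the retry interval doubling on each attempt: 200ms, 400ms, 800ms, 1.6s, and here 3.2s. This matches a plain exponential backoff; a minimal sketch, where the base and factor are inferred from the visible sequence and the cap is an assumption, not a confirmed kubelet parameter:

```python
from itertools import islice

def lease_backoff(base: float = 0.2, factor: float = 2.0, cap: float = 7.0):
    """Yield retry intervals in seconds, doubling until capped.

    base/factor reproduce the 200ms -> 3.2s sequence visible in the log;
    the 7s cap is an assumption about the lease controller, not something
    this log confirms.
    """
    interval = base
    while True:
        yield min(interval, cap)
        interval *= factor

print([round(i, 1) for i in islice(lease_backoff(), 6)])
```

The next retry visible in this log (interval="3.2s" at 00:06:07) continues the same sequence.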
Oct 30 00:06:05.502107 kubelet[2420]: I1030 00:06:05.502020 2420 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:06:05.502184 kubelet[2420]: E1030 00:06:05.502138 2420 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:06:05.502851 kubelet[2420]: E1030 00:06:05.502813 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:06:05.568936 kubelet[2420]: E1030 00:06:05.568872 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.603245 kubelet[2420]: E1030 00:06:05.603151 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:06:05.669923 kubelet[2420]: E1030 00:06:05.669768 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.770501 kubelet[2420]: E1030 00:06:05.770411 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.803676 kubelet[2420]: E1030 00:06:05.803605 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:06:05.870734 kubelet[2420]: E1030 00:06:05.870671 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:05.893374 kubelet[2420]: E1030 00:06:05.893289 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:06:05.971369 kubelet[2420]: E1030 00:06:05.971211 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.050062 kubelet[2420]: E1030 00:06:06.049948 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:06:06.071895 kubelet[2420]: E1030 00:06:06.071823 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.172538 kubelet[2420]: E1030 00:06:06.172466 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.177159 kubelet[2420]: E1030 00:06:06.177130 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:06:06.204503 kubelet[2420]: E1030 00:06:06.204458 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:06:06.271601 kubelet[2420]: E1030 00:06:06.271533 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.102:6443: connect: connection refused" interval="1.6s" Oct 30 00:06:06.272563 kubelet[2420]: E1030 00:06:06.272527 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.373195 kubelet[2420]: E1030 00:06:06.373108 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.432903 kubelet[2420]: E1030 00:06:06.432829 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:06:06.473750 kubelet[2420]: E1030 00:06:06.473660 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.574765 kubelet[2420]: E1030 00:06:06.574602 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.675399 kubelet[2420]: E1030 00:06:06.675314 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.776236 kubelet[2420]: E1030 00:06:06.776146 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.877463 kubelet[2420]: E1030 00:06:06.877294 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.978063 kubelet[2420]: E1030 00:06:06.977979 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:06.985828 kubelet[2420]: I1030 00:06:06.985760 2420 policy_none.go:49] "None policy: Start" Oct 30 00:06:06.985828 kubelet[2420]: I1030 
00:06:06.985816 2420 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:06:06.985906 kubelet[2420]: I1030 00:06:06.985856 2420 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:06:06.993040 kubelet[2420]: E1030 00:06:06.992989 2420 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:06:07.005265 kubelet[2420]: E1030 00:06:07.005174 2420 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 30 00:06:07.054799 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:06:07.078296 kubelet[2420]: E1030 00:06:07.078253 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:07.165429 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:06:07.178481 kubelet[2420]: E1030 00:06:07.178415 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:07.192980 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 30 00:06:07.215398 kubelet[2420]: E1030 00:06:07.215275 2420 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:06:07.215612 kubelet[2420]: I1030 00:06:07.215580 2420 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:06:07.215666 kubelet[2420]: I1030 00:06:07.215601 2420 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:06:07.215978 kubelet[2420]: I1030 00:06:07.215945 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:06:07.216844 kubelet[2420]: E1030 00:06:07.216793 2420 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:06:07.216991 kubelet[2420]: E1030 00:06:07.216881 2420 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 00:06:07.317716 kubelet[2420]: I1030 00:06:07.317657 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:06:07.318385 kubelet[2420]: E1030 00:06:07.318177 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Oct 30 00:06:07.520509 kubelet[2420]: I1030 00:06:07.520444 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:06:07.520915 kubelet[2420]: E1030 00:06:07.520860 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Oct 30 00:06:07.866736 kubelet[2420]: E1030 00:06:07.866569 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:06:07.872655 kubelet[2420]: E1030 00:06:07.872603 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="3.2s" Oct 30 00:06:07.922485 kubelet[2420]: I1030 00:06:07.922421 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:06:07.923188 kubelet[2420]: E1030 00:06:07.923066 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Oct 30 00:06:08.223873 kubelet[2420]: E1030 00:06:08.223713 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:06:08.510870 kubelet[2420]: E1030 00:06:08.510711 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:06:08.660158 systemd[1]: Created slice kubepods-burstable-pod80cb26873732f72c00d1e84007240e3c.slice - libcontainer container kubepods-burstable-pod80cb26873732f72c00d1e84007240e3c.slice. 
Oct 30 00:06:08.686276 kubelet[2420]: E1030 00:06:08.686228 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:08.687287 kubelet[2420]: I1030 00:06:08.687240 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:08.687366 kubelet[2420]: I1030 00:06:08.687300 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:08.687366 kubelet[2420]: I1030 00:06:08.687340 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80cb26873732f72c00d1e84007240e3c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80cb26873732f72c00d1e84007240e3c\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:08.687450 kubelet[2420]: I1030 00:06:08.687367 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80cb26873732f72c00d1e84007240e3c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80cb26873732f72c00d1e84007240e3c\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:08.687450 kubelet[2420]: I1030 00:06:08.687388 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/80cb26873732f72c00d1e84007240e3c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80cb26873732f72c00d1e84007240e3c\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:08.687450 kubelet[2420]: I1030 00:06:08.687407 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:08.687550 kubelet[2420]: I1030 00:06:08.687449 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:08.687550 kubelet[2420]: I1030 00:06:08.687489 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:08.724689 kubelet[2420]: I1030 00:06:08.724655 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:06:08.724996 kubelet[2420]: E1030 00:06:08.724967 2420 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Oct 30 00:06:08.730902 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container 
kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Oct 30 00:06:08.732649 kubelet[2420]: E1030 00:06:08.732620 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:08.788312 kubelet[2420]: I1030 00:06:08.788168 2420 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:06:08.817279 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. Oct 30 00:06:08.819234 kubelet[2420]: E1030 00:06:08.819179 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:08.987198 kubelet[2420]: E1030 00:06:08.986886 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:08.987862 containerd[1621]: time="2025-10-30T00:06:08.987795577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80cb26873732f72c00d1e84007240e3c,Namespace:kube-system,Attempt:0,}" Oct 30 00:06:09.026679 containerd[1621]: time="2025-10-30T00:06:09.026617729Z" level=info msg="connecting to shim 5661d8520f21b509922703a32d0cd2bae1f749be2b9c60077d642f8e71f6b300" address="unix:///run/containerd/s/ab9261b0c3b26ea8578a4c342cbc7e6477ebe877ebdbde7c265ac63b2090d5f7" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:09.033476 kubelet[2420]: E1030 00:06:09.033422 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.034138 containerd[1621]: time="2025-10-30T00:06:09.034093125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Oct 30 00:06:09.088402 systemd[1]: Started cri-containerd-5661d8520f21b509922703a32d0cd2bae1f749be2b9c60077d642f8e71f6b300.scope - libcontainer container 5661d8520f21b509922703a32d0cd2bae1f749be2b9c60077d642f8e71f6b300. Oct 30 00:06:09.099360 containerd[1621]: time="2025-10-30T00:06:09.099287633Z" level=info msg="connecting to shim ee8b2166e946b66fdbc704347fde55228a8513b5081dd523a412dd2faffd0962" address="unix:///run/containerd/s/dff7af8998395890ad04853fe553b74977c9b7f0f2bb9f00a28aaf37122b301d" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:09.120480 kubelet[2420]: E1030 00:06:09.120439 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.121129 containerd[1621]: time="2025-10-30T00:06:09.121061279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Oct 30 00:06:09.138442 systemd[1]: Started cri-containerd-ee8b2166e946b66fdbc704347fde55228a8513b5081dd523a412dd2faffd0962.scope - libcontainer container ee8b2166e946b66fdbc704347fde55228a8513b5081dd523a412dd2faffd0962. 
Oct 30 00:06:09.148374 kubelet[2420]: E1030 00:06:09.148328 2420 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:06:09.160932 containerd[1621]: time="2025-10-30T00:06:09.160852316Z" level=info msg="connecting to shim 9c9f075fe883a059528cb477d9bb9c4ec1555e62a11359c7939aa0e3c8676411" address="unix:///run/containerd/s/15736e264c3c85d3aa84f77fdd9aea9c2d4d6f1e7bb093ac9a4845cd4dd69606" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:09.164986 containerd[1621]: time="2025-10-30T00:06:09.164873702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80cb26873732f72c00d1e84007240e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5661d8520f21b509922703a32d0cd2bae1f749be2b9c60077d642f8e71f6b300\"" Oct 30 00:06:09.166475 kubelet[2420]: E1030 00:06:09.166444 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.174554 containerd[1621]: time="2025-10-30T00:06:09.174500446Z" level=info msg="CreateContainer within sandbox \"5661d8520f21b509922703a32d0cd2bae1f749be2b9c60077d642f8e71f6b300\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:06:09.189874 containerd[1621]: time="2025-10-30T00:06:09.189761947Z" level=info msg="Container 8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:09.203988 containerd[1621]: time="2025-10-30T00:06:09.203590538Z" level=info msg="CreateContainer within sandbox \"5661d8520f21b509922703a32d0cd2bae1f749be2b9c60077d642f8e71f6b300\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac\"" Oct 30 00:06:09.204541 containerd[1621]: time="2025-10-30T00:06:09.204507763Z" level=info msg="StartContainer for \"8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac\"" Oct 30 00:06:09.205775 containerd[1621]: time="2025-10-30T00:06:09.205743780Z" level=info msg="connecting to shim 8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac" address="unix:///run/containerd/s/ab9261b0c3b26ea8578a4c342cbc7e6477ebe877ebdbde7c265ac63b2090d5f7" protocol=ttrpc version=3 Oct 30 00:06:09.232052 systemd[1]: Started cri-containerd-9c9f075fe883a059528cb477d9bb9c4ec1555e62a11359c7939aa0e3c8676411.scope - libcontainer container 9c9f075fe883a059528cb477d9bb9c4ec1555e62a11359c7939aa0e3c8676411. Oct 30 00:06:09.247058 containerd[1621]: time="2025-10-30T00:06:09.247010398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee8b2166e946b66fdbc704347fde55228a8513b5081dd523a412dd2faffd0962\"" Oct 30 00:06:09.248099 kubelet[2420]: E1030 00:06:09.247950 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.253636 containerd[1621]: time="2025-10-30T00:06:09.253604408Z" level=info msg="CreateContainer within sandbox \"ee8b2166e946b66fdbc704347fde55228a8513b5081dd523a412dd2faffd0962\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:06:09.257286 systemd[1]: Started cri-containerd-8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac.scope - libcontainer container 8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac. 
Oct 30 00:06:09.266274 containerd[1621]: time="2025-10-30T00:06:09.266164130Z" level=info msg="Container af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:09.281145 containerd[1621]: time="2025-10-30T00:06:09.281095676Z" level=info msg="CreateContainer within sandbox \"ee8b2166e946b66fdbc704347fde55228a8513b5081dd523a412dd2faffd0962\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1\"" Oct 30 00:06:09.285549 containerd[1621]: time="2025-10-30T00:06:09.285500706Z" level=info msg="StartContainer for \"af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1\"" Oct 30 00:06:09.287662 containerd[1621]: time="2025-10-30T00:06:09.287619734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c9f075fe883a059528cb477d9bb9c4ec1555e62a11359c7939aa0e3c8676411\"" Oct 30 00:06:09.288331 kubelet[2420]: E1030 00:06:09.288308 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.290033 containerd[1621]: time="2025-10-30T00:06:09.290001457Z" level=info msg="connecting to shim af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1" address="unix:///run/containerd/s/dff7af8998395890ad04853fe553b74977c9b7f0f2bb9f00a28aaf37122b301d" protocol=ttrpc version=3 Oct 30 00:06:09.294651 containerd[1621]: time="2025-10-30T00:06:09.294613439Z" level=info msg="CreateContainer within sandbox \"9c9f075fe883a059528cb477d9bb9c4ec1555e62a11359c7939aa0e3c8676411\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:06:09.304606 containerd[1621]: time="2025-10-30T00:06:09.304560620Z" level=info msg="Container 
97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:09.319941 containerd[1621]: time="2025-10-30T00:06:09.319894177Z" level=info msg="CreateContainer within sandbox \"9c9f075fe883a059528cb477d9bb9c4ec1555e62a11359c7939aa0e3c8676411\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80\"" Oct 30 00:06:09.320368 containerd[1621]: time="2025-10-30T00:06:09.320343647Z" level=info msg="StartContainer for \"97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80\"" Oct 30 00:06:09.321422 containerd[1621]: time="2025-10-30T00:06:09.321387641Z" level=info msg="connecting to shim 97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80" address="unix:///run/containerd/s/15736e264c3c85d3aa84f77fdd9aea9c2d4d6f1e7bb093ac9a4845cd4dd69606" protocol=ttrpc version=3 Oct 30 00:06:09.323286 systemd[1]: Started cri-containerd-af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1.scope - libcontainer container af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1. Oct 30 00:06:09.340176 containerd[1621]: time="2025-10-30T00:06:09.339311126Z" level=info msg="StartContainer for \"8950b7cead2e57ace2b14ff2b3b755645e157dee8ce9bbf096aafb17d9a81fac\" returns successfully" Oct 30 00:06:09.344516 systemd[1]: Started cri-containerd-97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80.scope - libcontainer container 97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80. 
Oct 30 00:06:09.417429 containerd[1621]: time="2025-10-30T00:06:09.417377375Z" level=info msg="StartContainer for \"97e00954a426f24c2d6f7becbe88202549ba08522ceac8c42548887c0784be80\" returns successfully" Oct 30 00:06:09.431244 containerd[1621]: time="2025-10-30T00:06:09.431147616Z" level=info msg="StartContainer for \"af39af0ba7d51f64e3a1f0cc9332c2f4823e2ed95fc741ddbdd540a0b2c28fb1\" returns successfully" Oct 30 00:06:09.515374 kubelet[2420]: E1030 00:06:09.515332 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:09.515832 kubelet[2420]: E1030 00:06:09.515459 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.521707 kubelet[2420]: E1030 00:06:09.521673 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:09.521793 kubelet[2420]: E1030 00:06:09.521752 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:09.521903 kubelet[2420]: E1030 00:06:09.521884 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:09.522206 kubelet[2420]: E1030 00:06:09.521988 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:10.328950 kubelet[2420]: I1030 00:06:10.328892 2420 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:06:10.523477 kubelet[2420]: E1030 00:06:10.523364 2420 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:10.525353 kubelet[2420]: E1030 00:06:10.523977 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:10.525353 kubelet[2420]: E1030 00:06:10.524870 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:10.525353 kubelet[2420]: E1030 00:06:10.524965 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:11.156268 kubelet[2420]: E1030 00:06:11.156205 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 00:06:11.322888 kubelet[2420]: I1030 00:06:11.322835 2420 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:06:11.322888 kubelet[2420]: E1030 00:06:11.322886 2420 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 30 00:06:11.475307 kubelet[2420]: E1030 00:06:11.475169 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:11.523490 kubelet[2420]: E1030 00:06:11.523451 2420 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:06:11.523979 kubelet[2420]: E1030 00:06:11.523580 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 
00:06:11.576336 kubelet[2420]: E1030 00:06:11.576249 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:11.676743 kubelet[2420]: E1030 00:06:11.676672 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:11.777789 kubelet[2420]: E1030 00:06:11.777692 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:11.878347 kubelet[2420]: E1030 00:06:11.878278 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:11.979274 kubelet[2420]: E1030 00:06:11.979196 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.080441 kubelet[2420]: E1030 00:06:12.080270 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.181407 kubelet[2420]: E1030 00:06:12.181336 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.281531 kubelet[2420]: E1030 00:06:12.281479 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.382147 kubelet[2420]: E1030 00:06:12.381969 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.482853 kubelet[2420]: E1030 00:06:12.482768 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.583270 kubelet[2420]: E1030 00:06:12.583203 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.683939 kubelet[2420]: E1030 00:06:12.683788 2420 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.784601 kubelet[2420]: E1030 00:06:12.784542 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.885262 kubelet[2420]: E1030 00:06:12.885175 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:12.985667 kubelet[2420]: E1030 00:06:12.985414 2420 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:13.066365 kubelet[2420]: I1030 00:06:13.066289 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:13.203861 kubelet[2420]: I1030 00:06:13.203692 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:13.308597 kubelet[2420]: I1030 00:06:13.308533 2420 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:06:13.851103 kubelet[2420]: I1030 00:06:13.851048 2420 apiserver.go:52] "Watching apiserver" Oct 30 00:06:13.853379 kubelet[2420]: E1030 00:06:13.853332 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:13.854287 kubelet[2420]: E1030 00:06:13.854227 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:13.854657 kubelet[2420]: E1030 00:06:13.854611 2420 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:13.865277 kubelet[2420]: I1030 00:06:13.865229 2420 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:06:14.687777 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit session-7.scope)... Oct 30 00:06:14.687796 systemd[1]: Reloading... Oct 30 00:06:14.781159 zram_generator::config[2764]: No configuration found. Oct 30 00:06:15.104449 systemd[1]: Reloading finished in 416 ms. Oct 30 00:06:15.139616 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:15.163874 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:06:15.164339 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:15.164415 systemd[1]: kubelet.service: Consumed 1.290s CPU time, 130.5M memory peak. Oct 30 00:06:15.168305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:06:15.441353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:06:15.451587 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:06:15.540456 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:06:15.540456 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:06:15.540456 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:06:15.540887 kubelet[2810]: I1030 00:06:15.540486 2810 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:06:15.554912 kubelet[2810]: I1030 00:06:15.554852 2810 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 30 00:06:15.554912 kubelet[2810]: I1030 00:06:15.554886 2810 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:06:15.555208 kubelet[2810]: I1030 00:06:15.555189 2810 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:06:15.556472 kubelet[2810]: I1030 00:06:15.556437 2810 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 30 00:06:15.559416 kubelet[2810]: I1030 00:06:15.559377 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:06:15.565509 kubelet[2810]: I1030 00:06:15.565457 2810 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:06:15.571556 kubelet[2810]: I1030 00:06:15.571492 2810 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 30 00:06:15.571807 kubelet[2810]: I1030 00:06:15.571743 2810 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:06:15.572909 kubelet[2810]: I1030 00:06:15.571789 2810 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:06:15.572909 kubelet[2810]: I1030 00:06:15.572364 2810 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:06:15.572909 
kubelet[2810]: I1030 00:06:15.572385 2810 container_manager_linux.go:303] "Creating device plugin manager" Oct 30 00:06:15.572909 kubelet[2810]: I1030 00:06:15.572463 2810 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:06:15.573319 kubelet[2810]: I1030 00:06:15.573010 2810 kubelet.go:480] "Attempting to sync node with API server" Oct 30 00:06:15.573319 kubelet[2810]: I1030 00:06:15.573025 2810 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:06:15.573319 kubelet[2810]: I1030 00:06:15.573058 2810 kubelet.go:386] "Adding apiserver pod source" Oct 30 00:06:15.573319 kubelet[2810]: I1030 00:06:15.573103 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:06:15.577744 kubelet[2810]: I1030 00:06:15.577705 2810 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:06:15.578421 kubelet[2810]: I1030 00:06:15.578392 2810 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:06:15.585676 kubelet[2810]: I1030 00:06:15.585637 2810 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 30 00:06:15.585825 kubelet[2810]: I1030 00:06:15.585707 2810 server.go:1289] "Started kubelet" Oct 30 00:06:15.586503 kubelet[2810]: I1030 00:06:15.586402 2810 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:06:15.587214 kubelet[2810]: I1030 00:06:15.587172 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:06:15.587449 kubelet[2810]: I1030 00:06:15.587419 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:06:15.587541 kubelet[2810]: I1030 00:06:15.587518 2810 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:06:15.588106 
kubelet[2810]: I1030 00:06:15.587987 2810 server.go:317] "Adding debug handlers to kubelet server" Oct 30 00:06:15.589113 kubelet[2810]: I1030 00:06:15.589061 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:06:15.592151 kubelet[2810]: I1030 00:06:15.592127 2810 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 30 00:06:15.592237 kubelet[2810]: I1030 00:06:15.592220 2810 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 30 00:06:15.592379 kubelet[2810]: I1030 00:06:15.592329 2810 reconciler.go:26] "Reconciler: start to sync state" Oct 30 00:06:15.592613 kubelet[2810]: E1030 00:06:15.592591 2810 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:06:15.593259 kubelet[2810]: E1030 00:06:15.593163 2810 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:06:15.593695 kubelet[2810]: I1030 00:06:15.593675 2810 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:06:15.593793 kubelet[2810]: I1030 00:06:15.593766 2810 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:06:15.596874 kubelet[2810]: I1030 00:06:15.596840 2810 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:06:15.605113 kubelet[2810]: I1030 00:06:15.603725 2810 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 30 00:06:15.605429 kubelet[2810]: I1030 00:06:15.605403 2810 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Oct 30 00:06:15.605475 kubelet[2810]: I1030 00:06:15.605431 2810 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 30 00:06:15.605475 kubelet[2810]: I1030 00:06:15.605462 2810 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:06:15.605475 kubelet[2810]: I1030 00:06:15.605474 2810 kubelet.go:2436] "Starting kubelet main sync loop" Oct 30 00:06:15.605580 kubelet[2810]: E1030 00:06:15.605530 2810 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:06:15.645856 kubelet[2810]: I1030 00:06:15.645813 2810 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:06:15.645856 kubelet[2810]: I1030 00:06:15.645836 2810 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:06:15.645856 kubelet[2810]: I1030 00:06:15.645863 2810 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:06:15.646139 kubelet[2810]: I1030 00:06:15.646030 2810 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 00:06:15.646139 kubelet[2810]: I1030 00:06:15.646041 2810 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 00:06:15.646139 kubelet[2810]: I1030 00:06:15.646057 2810 policy_none.go:49] "None policy: Start" Oct 30 00:06:15.646139 kubelet[2810]: I1030 00:06:15.646068 2810 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 30 00:06:15.646139 kubelet[2810]: I1030 00:06:15.646111 2810 state_mem.go:35] "Initializing new in-memory state store" Oct 30 00:06:15.646294 kubelet[2810]: I1030 00:06:15.646228 2810 state_mem.go:75] "Updated machine memory state" Oct 30 00:06:15.651768 kubelet[2810]: E1030 00:06:15.651647 2810 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:06:15.652043 kubelet[2810]: I1030 
00:06:15.652008 2810 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:06:15.652123 kubelet[2810]: I1030 00:06:15.652026 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:06:15.652423 kubelet[2810]: I1030 00:06:15.652401 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:06:15.655304 kubelet[2810]: E1030 00:06:15.655206 2810 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:06:15.707941 kubelet[2810]: I1030 00:06:15.707270 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:15.707941 kubelet[2810]: I1030 00:06:15.707421 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:15.707941 kubelet[2810]: I1030 00:06:15.707555 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:06:15.748107 kubelet[2810]: E1030 00:06:15.747970 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 00:06:15.748431 kubelet[2810]: E1030 00:06:15.748400 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:15.748598 kubelet[2810]: E1030 00:06:15.748492 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:15.759308 kubelet[2810]: I1030 00:06:15.759071 2810 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:06:15.778259 kubelet[2810]: I1030 00:06:15.778215 2810 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Oct 30 00:06:15.778445 kubelet[2810]: I1030 00:06:15.778366 2810 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:06:15.793974 kubelet[2810]: I1030 00:06:15.793916 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80cb26873732f72c00d1e84007240e3c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80cb26873732f72c00d1e84007240e3c\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:15.793974 kubelet[2810]: I1030 00:06:15.793961 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80cb26873732f72c00d1e84007240e3c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80cb26873732f72c00d1e84007240e3c\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:15.793974 kubelet[2810]: I1030 00:06:15.793981 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:15.793974 kubelet[2810]: I1030 00:06:15.793996 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:15.793974 kubelet[2810]: I1030 00:06:15.794010 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:15.794336 kubelet[2810]: I1030 00:06:15.794036 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:15.794336 kubelet[2810]: I1030 00:06:15.794151 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:06:15.794336 kubelet[2810]: I1030 00:06:15.794210 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80cb26873732f72c00d1e84007240e3c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80cb26873732f72c00d1e84007240e3c\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:15.794336 kubelet[2810]: I1030 00:06:15.794330 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:16.049266 kubelet[2810]: E1030 00:06:16.048798 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:16.049266 kubelet[2810]: E1030 00:06:16.049191 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:16.049266 kubelet[2810]: E1030 00:06:16.049209 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:16.577602 kubelet[2810]: I1030 00:06:16.577525 2810 apiserver.go:52] "Watching apiserver" Oct 30 00:06:16.592672 kubelet[2810]: I1030 00:06:16.592594 2810 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 30 00:06:16.623056 kubelet[2810]: I1030 00:06:16.623012 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:16.623298 kubelet[2810]: I1030 00:06:16.623204 2810 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:16.623584 kubelet[2810]: E1030 00:06:16.623560 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:16.691949 kubelet[2810]: E1030 00:06:16.691181 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 00:06:16.691949 kubelet[2810]: E1030 00:06:16.691422 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:16.691949 kubelet[2810]: E1030 00:06:16.691446 2810 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" 
pod="kube-system/kube-controller-manager-localhost" Oct 30 00:06:16.691949 kubelet[2810]: E1030 00:06:16.691793 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:16.728684 kubelet[2810]: I1030 00:06:16.728559 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.728541268 podStartE2EDuration="3.728541268s" podCreationTimestamp="2025-10-30 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:06:16.728237184 +0000 UTC m=+1.271065412" watchObservedRunningTime="2025-10-30 00:06:16.728541268 +0000 UTC m=+1.271369485" Oct 30 00:06:16.754100 kubelet[2810]: I1030 00:06:16.753992 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.753971032 podStartE2EDuration="3.753971032s" podCreationTimestamp="2025-10-30 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:06:16.741304451 +0000 UTC m=+1.284132688" watchObservedRunningTime="2025-10-30 00:06:16.753971032 +0000 UTC m=+1.296799249" Oct 30 00:06:16.777797 kubelet[2810]: I1030 00:06:16.777710 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.777688278 podStartE2EDuration="3.777688278s" podCreationTimestamp="2025-10-30 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:06:16.754922286 +0000 UTC m=+1.297750523" watchObservedRunningTime="2025-10-30 00:06:16.777688278 +0000 UTC m=+1.320516485" Oct 30 00:06:17.625764 
kubelet[2810]: E1030 00:06:17.625285 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:17.626598 kubelet[2810]: E1030 00:06:17.626511 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:17.626685 kubelet[2810]: E1030 00:06:17.626598 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:18.627101 kubelet[2810]: E1030 00:06:18.627040 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:18.627575 kubelet[2810]: E1030 00:06:18.627255 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:19.500967 kubelet[2810]: I1030 00:06:19.500920 2810 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:06:19.501540 containerd[1621]: time="2025-10-30T00:06:19.501470916Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 30 00:06:19.501970 kubelet[2810]: I1030 00:06:19.501702 2810 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 00:06:20.538316 systemd[1]: Created slice kubepods-besteffort-poda237e278_019b_42a0_8692_0fa35fd6a734.slice - libcontainer container kubepods-besteffort-poda237e278_019b_42a0_8692_0fa35fd6a734.slice. 
Oct 30 00:06:20.627758 kubelet[2810]: I1030 00:06:20.627663 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24cth\" (UniqueName: \"kubernetes.io/projected/a237e278-019b-42a0-8692-0fa35fd6a734-kube-api-access-24cth\") pod \"kube-proxy-d9cw2\" (UID: \"a237e278-019b-42a0-8692-0fa35fd6a734\") " pod="kube-system/kube-proxy-d9cw2" Oct 30 00:06:20.628388 kubelet[2810]: I1030 00:06:20.627888 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a237e278-019b-42a0-8692-0fa35fd6a734-kube-proxy\") pod \"kube-proxy-d9cw2\" (UID: \"a237e278-019b-42a0-8692-0fa35fd6a734\") " pod="kube-system/kube-proxy-d9cw2" Oct 30 00:06:20.628388 kubelet[2810]: I1030 00:06:20.627939 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a237e278-019b-42a0-8692-0fa35fd6a734-xtables-lock\") pod \"kube-proxy-d9cw2\" (UID: \"a237e278-019b-42a0-8692-0fa35fd6a734\") " pod="kube-system/kube-proxy-d9cw2" Oct 30 00:06:20.628388 kubelet[2810]: I1030 00:06:20.627963 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a237e278-019b-42a0-8692-0fa35fd6a734-lib-modules\") pod \"kube-proxy-d9cw2\" (UID: \"a237e278-019b-42a0-8692-0fa35fd6a734\") " pod="kube-system/kube-proxy-d9cw2" Oct 30 00:06:20.664790 systemd[1]: Created slice kubepods-besteffort-podbe90fb42_dc32_4379_8397_9212913eede0.slice - libcontainer container kubepods-besteffort-podbe90fb42_dc32_4379_8397_9212913eede0.slice. 
Oct 30 00:06:20.729335 kubelet[2810]: I1030 00:06:20.729193 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxsd4\" (UniqueName: \"kubernetes.io/projected/be90fb42-dc32-4379-8397-9212913eede0-kube-api-access-wxsd4\") pod \"tigera-operator-7dcd859c48-sb768\" (UID: \"be90fb42-dc32-4379-8397-9212913eede0\") " pod="tigera-operator/tigera-operator-7dcd859c48-sb768" Oct 30 00:06:20.729538 kubelet[2810]: I1030 00:06:20.729462 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be90fb42-dc32-4379-8397-9212913eede0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sb768\" (UID: \"be90fb42-dc32-4379-8397-9212913eede0\") " pod="tigera-operator/tigera-operator-7dcd859c48-sb768" Oct 30 00:06:20.851529 kubelet[2810]: E1030 00:06:20.851117 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:20.852645 containerd[1621]: time="2025-10-30T00:06:20.852593002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9cw2,Uid:a237e278-019b-42a0-8692-0fa35fd6a734,Namespace:kube-system,Attempt:0,}" Oct 30 00:06:20.907147 containerd[1621]: time="2025-10-30T00:06:20.907070237Z" level=info msg="connecting to shim 597ad7457fa73b754215a9d1b8f6ec19f5c1695e8dd92998077fc53247888440" address="unix:///run/containerd/s/1cf657a1e8b74aa12586c6390620a3a3e9db142bd6b8afba66b8b3830942b537" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:20.970317 containerd[1621]: time="2025-10-30T00:06:20.969726013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sb768,Uid:be90fb42-dc32-4379-8397-9212913eede0,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:06:21.000595 systemd[1]: Started 
cri-containerd-597ad7457fa73b754215a9d1b8f6ec19f5c1695e8dd92998077fc53247888440.scope - libcontainer container 597ad7457fa73b754215a9d1b8f6ec19f5c1695e8dd92998077fc53247888440. Oct 30 00:06:21.014012 containerd[1621]: time="2025-10-30T00:06:21.013787939Z" level=info msg="connecting to shim 097f7bd5adeb4f30f0215bfb1cce1d7e383f908583aee35534fd8cc1f0270fb5" address="unix:///run/containerd/s/d220f6c7e8afc5c3fcec46c65a4dbd95c182497d7593b01c98715ba539c6c8c4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:21.050407 systemd[1]: Started cri-containerd-097f7bd5adeb4f30f0215bfb1cce1d7e383f908583aee35534fd8cc1f0270fb5.scope - libcontainer container 097f7bd5adeb4f30f0215bfb1cce1d7e383f908583aee35534fd8cc1f0270fb5. Oct 30 00:06:21.060058 containerd[1621]: time="2025-10-30T00:06:21.059044320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d9cw2,Uid:a237e278-019b-42a0-8692-0fa35fd6a734,Namespace:kube-system,Attempt:0,} returns sandbox id \"597ad7457fa73b754215a9d1b8f6ec19f5c1695e8dd92998077fc53247888440\"" Oct 30 00:06:21.062750 kubelet[2810]: E1030 00:06:21.062682 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:21.073066 containerd[1621]: time="2025-10-30T00:06:21.072997528Z" level=info msg="CreateContainer within sandbox \"597ad7457fa73b754215a9d1b8f6ec19f5c1695e8dd92998077fc53247888440\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:06:21.088750 containerd[1621]: time="2025-10-30T00:06:21.088673199Z" level=info msg="Container 371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:21.102203 containerd[1621]: time="2025-10-30T00:06:21.101962447Z" level=info msg="CreateContainer within sandbox \"597ad7457fa73b754215a9d1b8f6ec19f5c1695e8dd92998077fc53247888440\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} 
returns container id \"371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9\"" Oct 30 00:06:21.103956 containerd[1621]: time="2025-10-30T00:06:21.103111371Z" level=info msg="StartContainer for \"371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9\"" Oct 30 00:06:21.110322 containerd[1621]: time="2025-10-30T00:06:21.110162525Z" level=info msg="connecting to shim 371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9" address="unix:///run/containerd/s/1cf657a1e8b74aa12586c6390620a3a3e9db142bd6b8afba66b8b3830942b537" protocol=ttrpc version=3 Oct 30 00:06:21.117265 containerd[1621]: time="2025-10-30T00:06:21.117199613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sb768,Uid:be90fb42-dc32-4379-8397-9212913eede0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"097f7bd5adeb4f30f0215bfb1cce1d7e383f908583aee35534fd8cc1f0270fb5\"" Oct 30 00:06:21.120044 containerd[1621]: time="2025-10-30T00:06:21.119987491Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:06:21.141429 systemd[1]: Started cri-containerd-371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9.scope - libcontainer container 371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9. 
Oct 30 00:06:21.250321 containerd[1621]: time="2025-10-30T00:06:21.250266224Z" level=info msg="StartContainer for \"371de173b4efb97cb3e8d656b56dc88a4095c5968ba4f4872b1daea5e39690b9\" returns successfully" Oct 30 00:06:21.636204 kubelet[2810]: E1030 00:06:21.636090 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:21.699217 kubelet[2810]: I1030 00:06:21.699119 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d9cw2" podStartSLOduration=1.69909467 podStartE2EDuration="1.69909467s" podCreationTimestamp="2025-10-30 00:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:06:21.698520128 +0000 UTC m=+6.241348365" watchObservedRunningTime="2025-10-30 00:06:21.69909467 +0000 UTC m=+6.241922887" Oct 30 00:06:21.794227 kubelet[2810]: E1030 00:06:21.794188 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:22.639109 kubelet[2810]: E1030 00:06:22.639034 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:23.359537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545382906.mount: Deactivated successfully. 
Oct 30 00:06:24.170575 containerd[1621]: time="2025-10-30T00:06:24.170460074Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:24.174028 containerd[1621]: time="2025-10-30T00:06:24.173945392Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:06:24.175835 containerd[1621]: time="2025-10-30T00:06:24.175777789Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:24.179010 containerd[1621]: time="2025-10-30T00:06:24.178906595Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:24.179561 containerd[1621]: time="2025-10-30T00:06:24.179509589Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.059471344s" Oct 30 00:06:24.179561 containerd[1621]: time="2025-10-30T00:06:24.179553922Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:06:24.190927 containerd[1621]: time="2025-10-30T00:06:24.190869221Z" level=info msg="CreateContainer within sandbox \"097f7bd5adeb4f30f0215bfb1cce1d7e383f908583aee35534fd8cc1f0270fb5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:06:24.201091 containerd[1621]: time="2025-10-30T00:06:24.201006432Z" level=info msg="Container 
7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:24.213058 containerd[1621]: time="2025-10-30T00:06:24.212997922Z" level=info msg="CreateContainer within sandbox \"097f7bd5adeb4f30f0215bfb1cce1d7e383f908583aee35534fd8cc1f0270fb5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc\"" Oct 30 00:06:24.213818 containerd[1621]: time="2025-10-30T00:06:24.213791224Z" level=info msg="StartContainer for \"7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc\"" Oct 30 00:06:24.214877 containerd[1621]: time="2025-10-30T00:06:24.214850787Z" level=info msg="connecting to shim 7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc" address="unix:///run/containerd/s/d220f6c7e8afc5c3fcec46c65a4dbd95c182497d7593b01c98715ba539c6c8c4" protocol=ttrpc version=3 Oct 30 00:06:24.239269 systemd[1]: Started cri-containerd-7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc.scope - libcontainer container 7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc. 
Oct 30 00:06:24.280299 containerd[1621]: time="2025-10-30T00:06:24.280244054Z" level=info msg="StartContainer for \"7e9c57ee4d2ee9b2890d01bfd9802e49502752856f88fef5e6f057518a8e7cbc\" returns successfully" Oct 30 00:06:24.701517 kubelet[2810]: I1030 00:06:24.701437 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sb768" podStartSLOduration=1.639649515 podStartE2EDuration="4.701404084s" podCreationTimestamp="2025-10-30 00:06:20 +0000 UTC" firstStartedPulling="2025-10-30 00:06:21.119107916 +0000 UTC m=+5.661936133" lastFinishedPulling="2025-10-30 00:06:24.180862485 +0000 UTC m=+8.723690702" observedRunningTime="2025-10-30 00:06:24.701378817 +0000 UTC m=+9.244207044" watchObservedRunningTime="2025-10-30 00:06:24.701404084 +0000 UTC m=+9.244232301" Oct 30 00:06:26.823975 kubelet[2810]: E1030 00:06:26.823350 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:27.177207 kubelet[2810]: E1030 00:06:27.176985 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:27.652285 kubelet[2810]: E1030 00:06:27.652247 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:27.652483 kubelet[2810]: E1030 00:06:27.652388 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:31.907473 sudo[1824]: pam_unix(sudo:session): session closed for user root Oct 30 00:06:31.909509 sshd[1823]: Connection closed by 10.0.0.1 port 36224 Oct 30 00:06:31.910343 sshd-session[1820]: 
pam_unix(sshd:session): session closed for user core Oct 30 00:06:31.915637 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:36224.service: Deactivated successfully. Oct 30 00:06:31.924847 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:06:31.926245 systemd[1]: session-7.scope: Consumed 5.138s CPU time, 218M memory peak. Oct 30 00:06:31.931175 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit. Oct 30 00:06:31.935285 systemd-logind[1593]: Removed session 7. Oct 30 00:06:37.866485 systemd[1]: Created slice kubepods-besteffort-podec280b25_621d_4ddb_896d_22f49aa7a5fa.slice - libcontainer container kubepods-besteffort-podec280b25_621d_4ddb_896d_22f49aa7a5fa.slice. Oct 30 00:06:37.945650 kubelet[2810]: I1030 00:06:37.945565 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec280b25-621d-4ddb-896d-22f49aa7a5fa-tigera-ca-bundle\") pod \"calico-typha-868d486c47-nd5cz\" (UID: \"ec280b25-621d-4ddb-896d-22f49aa7a5fa\") " pod="calico-system/calico-typha-868d486c47-nd5cz" Oct 30 00:06:37.945650 kubelet[2810]: I1030 00:06:37.945638 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ec280b25-621d-4ddb-896d-22f49aa7a5fa-typha-certs\") pod \"calico-typha-868d486c47-nd5cz\" (UID: \"ec280b25-621d-4ddb-896d-22f49aa7a5fa\") " pod="calico-system/calico-typha-868d486c47-nd5cz" Oct 30 00:06:37.945650 kubelet[2810]: I1030 00:06:37.945658 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bstrt\" (UniqueName: \"kubernetes.io/projected/ec280b25-621d-4ddb-896d-22f49aa7a5fa-kube-api-access-bstrt\") pod \"calico-typha-868d486c47-nd5cz\" (UID: \"ec280b25-621d-4ddb-896d-22f49aa7a5fa\") " pod="calico-system/calico-typha-868d486c47-nd5cz" Oct 30 00:06:38.005941 systemd[1]: Created slice 
kubepods-besteffort-pod665abdc2_126d_4be1_9fe9_144daad4992b.slice - libcontainer container kubepods-besteffort-pod665abdc2_126d_4be1_9fe9_144daad4992b.slice. Oct 30 00:06:38.045985 kubelet[2810]: I1030 00:06:38.045896 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/665abdc2-126d-4be1-9fe9-144daad4992b-tigera-ca-bundle\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046216 kubelet[2810]: I1030 00:06:38.046027 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-cni-bin-dir\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046216 kubelet[2810]: I1030 00:06:38.046054 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-cni-log-dir\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046216 kubelet[2810]: I1030 00:06:38.046112 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-var-lib-calico\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046305 kubelet[2810]: I1030 00:06:38.046217 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-xtables-lock\") pod \"calico-node-p998t\" (UID: 
\"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046305 kubelet[2810]: I1030 00:06:38.046250 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-flexvol-driver-host\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046305 kubelet[2810]: I1030 00:06:38.046277 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-policysync\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046305 kubelet[2810]: I1030 00:06:38.046303 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-cni-net-dir\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046430 kubelet[2810]: I1030 00:06:38.046327 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-lib-modules\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046430 kubelet[2810]: I1030 00:06:38.046354 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/665abdc2-126d-4be1-9fe9-144daad4992b-node-certs\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 
30 00:06:38.046430 kubelet[2810]: I1030 00:06:38.046396 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/665abdc2-126d-4be1-9fe9-144daad4992b-var-run-calico\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.046430 kubelet[2810]: I1030 00:06:38.046420 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9989m\" (UniqueName: \"kubernetes.io/projected/665abdc2-126d-4be1-9fe9-144daad4992b-kube-api-access-9989m\") pod \"calico-node-p998t\" (UID: \"665abdc2-126d-4be1-9fe9-144daad4992b\") " pod="calico-system/calico-node-p998t" Oct 30 00:06:38.152233 kubelet[2810]: E1030 00:06:38.151714 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.152233 kubelet[2810]: W1030 00:06:38.151743 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.158110 kubelet[2810]: E1030 00:06:38.157069 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.160525 kubelet[2810]: E1030 00:06:38.160493 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.160525 kubelet[2810]: W1030 00:06:38.160521 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.160632 kubelet[2810]: E1030 00:06:38.160558 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.176266 kubelet[2810]: E1030 00:06:38.176208 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:38.177197 containerd[1621]: time="2025-10-30T00:06:38.177148975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-868d486c47-nd5cz,Uid:ec280b25-621d-4ddb-896d-22f49aa7a5fa,Namespace:calico-system,Attempt:0,}" Oct 30 00:06:38.203156 kubelet[2810]: E1030 00:06:38.202866 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:38.221456 containerd[1621]: time="2025-10-30T00:06:38.221298127Z" level=info msg="connecting to shim c73fcf9570cbdcf3c9f58ea66e0567007f7b974507e6bd2066da1452bfe91620" address="unix:///run/containerd/s/0f6793a6fa2e79a74f2a4fcbc90c95b977e1addbaf5d577811a6fb41c7dc363b" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:38.229231 kubelet[2810]: E1030 00:06:38.229194 2810 driver-call.go:262] Failed to 
unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.229231 kubelet[2810]: W1030 00:06:38.229219 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.229406 kubelet[2810]: E1030 00:06:38.229246 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.231910 kubelet[2810]: E1030 00:06:38.231878 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.231910 kubelet[2810]: W1030 00:06:38.231909 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.232028 kubelet[2810]: E1030 00:06:38.231929 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.232197 kubelet[2810]: E1030 00:06:38.232176 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.232197 kubelet[2810]: W1030 00:06:38.232193 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.232540 kubelet[2810]: E1030 00:06:38.232205 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.232540 kubelet[2810]: E1030 00:06:38.232467 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.232540 kubelet[2810]: W1030 00:06:38.232478 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.232540 kubelet[2810]: E1030 00:06:38.232490 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.232735 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.234468 kubelet[2810]: W1030 00:06:38.232748 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.232759 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.233135 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.234468 kubelet[2810]: W1030 00:06:38.233147 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.233159 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.233407 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.234468 kubelet[2810]: W1030 00:06:38.233544 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.233558 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.234468 kubelet[2810]: E1030 00:06:38.234067 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.234809 kubelet[2810]: W1030 00:06:38.234133 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.234809 kubelet[2810]: E1030 00:06:38.234147 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.234809 kubelet[2810]: E1030 00:06:38.234520 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.234809 kubelet[2810]: W1030 00:06:38.234532 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.234809 kubelet[2810]: E1030 00:06:38.234543 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.234809 kubelet[2810]: E1030 00:06:38.234805 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.235003 kubelet[2810]: W1030 00:06:38.234817 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.235003 kubelet[2810]: E1030 00:06:38.234829 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.235173 kubelet[2810]: E1030 00:06:38.235151 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.235173 kubelet[2810]: W1030 00:06:38.235164 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.235173 kubelet[2810]: E1030 00:06:38.235176 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.235443 kubelet[2810]: E1030 00:06:38.235412 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.235443 kubelet[2810]: W1030 00:06:38.235424 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.235443 kubelet[2810]: E1030 00:06:38.235436 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.235703 kubelet[2810]: E1030 00:06:38.235661 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.235703 kubelet[2810]: W1030 00:06:38.235673 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.235703 kubelet[2810]: E1030 00:06:38.235684 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.236286 kubelet[2810]: E1030 00:06:38.236267 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.236286 kubelet[2810]: W1030 00:06:38.236281 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.236407 kubelet[2810]: E1030 00:06:38.236293 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.236521 kubelet[2810]: E1030 00:06:38.236500 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.236521 kubelet[2810]: W1030 00:06:38.236513 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.236671 kubelet[2810]: E1030 00:06:38.236524 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.236737 kubelet[2810]: E1030 00:06:38.236728 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.236821 kubelet[2810]: W1030 00:06:38.236739 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.236821 kubelet[2810]: E1030 00:06:38.236754 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.236982 kubelet[2810]: E1030 00:06:38.236962 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.236982 kubelet[2810]: W1030 00:06:38.236974 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.237115 kubelet[2810]: E1030 00:06:38.236985 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.238341 kubelet[2810]: E1030 00:06:38.238304 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.238341 kubelet[2810]: W1030 00:06:38.238317 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.238341 kubelet[2810]: E1030 00:06:38.238329 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.238575 kubelet[2810]: E1030 00:06:38.238540 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.238575 kubelet[2810]: W1030 00:06:38.238553 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.238668 kubelet[2810]: E1030 00:06:38.238611 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.238852 kubelet[2810]: E1030 00:06:38.238818 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.238852 kubelet[2810]: W1030 00:06:38.238831 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.238852 kubelet[2810]: E1030 00:06:38.238842 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.248336 kubelet[2810]: E1030 00:06:38.248295 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.248336 kubelet[2810]: W1030 00:06:38.248318 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.248485 kubelet[2810]: E1030 00:06:38.248341 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.248485 kubelet[2810]: I1030 00:06:38.248378 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d62c2877-00ac-4394-911e-002e28febfd2-varrun\") pod \"csi-node-driver-bgd9q\" (UID: \"d62c2877-00ac-4394-911e-002e28febfd2\") " pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:38.248722 kubelet[2810]: E1030 00:06:38.248689 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.248722 kubelet[2810]: W1030 00:06:38.248707 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.248722 kubelet[2810]: E1030 00:06:38.248719 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.248819 kubelet[2810]: I1030 00:06:38.248755 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d62c2877-00ac-4394-911e-002e28febfd2-registration-dir\") pod \"csi-node-driver-bgd9q\" (UID: \"d62c2877-00ac-4394-911e-002e28febfd2\") " pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:38.249090 kubelet[2810]: E1030 00:06:38.249053 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.249125 kubelet[2810]: W1030 00:06:38.249069 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.249125 kubelet[2810]: E1030 00:06:38.249104 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.249168 kubelet[2810]: I1030 00:06:38.249131 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2nd\" (UniqueName: \"kubernetes.io/projected/d62c2877-00ac-4394-911e-002e28febfd2-kube-api-access-xn2nd\") pod \"csi-node-driver-bgd9q\" (UID: \"d62c2877-00ac-4394-911e-002e28febfd2\") " pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.249395 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.250714 kubelet[2810]: W1030 00:06:38.249410 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.249421 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.249627 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.250714 kubelet[2810]: W1030 00:06:38.249636 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.249644 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.249866 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.250714 kubelet[2810]: W1030 00:06:38.249874 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.249883 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.250714 kubelet[2810]: E1030 00:06:38.250131 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251064 kubelet[2810]: W1030 00:06:38.250141 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251064 kubelet[2810]: E1030 00:06:38.250153 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.251064 kubelet[2810]: E1030 00:06:38.250373 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251064 kubelet[2810]: W1030 00:06:38.250381 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251064 kubelet[2810]: E1030 00:06:38.250389 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.251064 kubelet[2810]: I1030 00:06:38.250423 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d62c2877-00ac-4394-911e-002e28febfd2-kubelet-dir\") pod \"csi-node-driver-bgd9q\" (UID: \"d62c2877-00ac-4394-911e-002e28febfd2\") " pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:38.251064 kubelet[2810]: E1030 00:06:38.250636 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251064 kubelet[2810]: W1030 00:06:38.250646 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251064 kubelet[2810]: E1030 00:06:38.250656 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.251364 kubelet[2810]: I1030 00:06:38.250701 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d62c2877-00ac-4394-911e-002e28febfd2-socket-dir\") pod \"csi-node-driver-bgd9q\" (UID: \"d62c2877-00ac-4394-911e-002e28febfd2\") " pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:38.251364 kubelet[2810]: E1030 00:06:38.250902 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251364 kubelet[2810]: W1030 00:06:38.250912 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251364 kubelet[2810]: E1030 00:06:38.250923 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.251364 kubelet[2810]: E1030 00:06:38.251242 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251364 kubelet[2810]: W1030 00:06:38.251254 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251364 kubelet[2810]: E1030 00:06:38.251266 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.251653 kubelet[2810]: E1030 00:06:38.251490 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251653 kubelet[2810]: W1030 00:06:38.251500 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251653 kubelet[2810]: E1030 00:06:38.251511 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.251761 kubelet[2810]: E1030 00:06:38.251726 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.251761 kubelet[2810]: W1030 00:06:38.251736 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.251761 kubelet[2810]: E1030 00:06:38.251747 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.252018 kubelet[2810]: E1030 00:06:38.251988 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.252018 kubelet[2810]: W1030 00:06:38.252010 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.252135 kubelet[2810]: E1030 00:06:38.252023 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.252617 kubelet[2810]: E1030 00:06:38.252594 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.252617 kubelet[2810]: W1030 00:06:38.252615 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.252718 kubelet[2810]: E1030 00:06:38.252628 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.255340 systemd[1]: Started cri-containerd-c73fcf9570cbdcf3c9f58ea66e0567007f7b974507e6bd2066da1452bfe91620.scope - libcontainer container c73fcf9570cbdcf3c9f58ea66e0567007f7b974507e6bd2066da1452bfe91620. 
Oct 30 00:06:38.312457 kubelet[2810]: E1030 00:06:38.312387 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:38.314461 containerd[1621]: time="2025-10-30T00:06:38.314406665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p998t,Uid:665abdc2-126d-4be1-9fe9-144daad4992b,Namespace:calico-system,Attempt:0,}" Oct 30 00:06:38.325880 containerd[1621]: time="2025-10-30T00:06:38.325818424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-868d486c47-nd5cz,Uid:ec280b25-621d-4ddb-896d-22f49aa7a5fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"c73fcf9570cbdcf3c9f58ea66e0567007f7b974507e6bd2066da1452bfe91620\"" Oct 30 00:06:38.331887 kubelet[2810]: E1030 00:06:38.331849 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:38.337099 containerd[1621]: time="2025-10-30T00:06:38.336841402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:06:38.351743 kubelet[2810]: E1030 00:06:38.351702 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.351743 kubelet[2810]: W1030 00:06:38.351732 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.351950 kubelet[2810]: E1030 00:06:38.351758 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.352872 kubelet[2810]: E1030 00:06:38.352850 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.352872 kubelet[2810]: W1030 00:06:38.352869 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.352987 kubelet[2810]: E1030 00:06:38.352883 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.353197 kubelet[2810]: E1030 00:06:38.353167 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.353197 kubelet[2810]: W1030 00:06:38.353190 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.353326 kubelet[2810]: E1030 00:06:38.353204 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.353512 kubelet[2810]: E1030 00:06:38.353493 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.353549 kubelet[2810]: W1030 00:06:38.353513 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.353549 kubelet[2810]: E1030 00:06:38.353527 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.353935 kubelet[2810]: E1030 00:06:38.353901 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.353935 kubelet[2810]: W1030 00:06:38.353922 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.354133 kubelet[2810]: E1030 00:06:38.353935 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.354453 kubelet[2810]: E1030 00:06:38.354404 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.354453 kubelet[2810]: W1030 00:06:38.354420 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.354453 kubelet[2810]: E1030 00:06:38.354433 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.355105 kubelet[2810]: E1030 00:06:38.355049 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.355249 kubelet[2810]: W1030 00:06:38.355071 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.355249 kubelet[2810]: E1030 00:06:38.355232 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.355635 kubelet[2810]: E1030 00:06:38.355614 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.355635 kubelet[2810]: W1030 00:06:38.355631 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.355718 kubelet[2810]: E1030 00:06:38.355644 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.356370 kubelet[2810]: E1030 00:06:38.356333 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.356370 kubelet[2810]: W1030 00:06:38.356347 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.356370 kubelet[2810]: E1030 00:06:38.356361 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.356697 kubelet[2810]: E1030 00:06:38.356677 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.356697 kubelet[2810]: W1030 00:06:38.356694 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.356750 kubelet[2810]: E1030 00:06:38.356709 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.357188 kubelet[2810]: E1030 00:06:38.357155 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.357236 kubelet[2810]: W1030 00:06:38.357200 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.357236 kubelet[2810]: E1030 00:06:38.357216 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.357487 kubelet[2810]: E1030 00:06:38.357468 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.357487 kubelet[2810]: W1030 00:06:38.357484 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.357554 kubelet[2810]: E1030 00:06:38.357496 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.358192 containerd[1621]: time="2025-10-30T00:06:38.358048935Z" level=info msg="connecting to shim fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6" address="unix:///run/containerd/s/a21d5004ed885a30e5f00c198042a8272a83c2b0e07a0cb8b234426ceddb907f" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:06:38.358420 kubelet[2810]: E1030 00:06:38.358158 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.358420 kubelet[2810]: W1030 00:06:38.358171 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.358420 kubelet[2810]: E1030 00:06:38.358184 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.358699 kubelet[2810]: E1030 00:06:38.358570 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.358699 kubelet[2810]: W1030 00:06:38.358583 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.358699 kubelet[2810]: E1030 00:06:38.358622 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.359141 kubelet[2810]: E1030 00:06:38.359095 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.359141 kubelet[2810]: W1030 00:06:38.359114 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.359141 kubelet[2810]: E1030 00:06:38.359128 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.359477 kubelet[2810]: E1030 00:06:38.359455 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.359477 kubelet[2810]: W1030 00:06:38.359472 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.359540 kubelet[2810]: E1030 00:06:38.359485 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.359936 kubelet[2810]: E1030 00:06:38.359915 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.359936 kubelet[2810]: W1030 00:06:38.359932 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.360021 kubelet[2810]: E1030 00:06:38.359946 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.360326 kubelet[2810]: E1030 00:06:38.360304 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.360326 kubelet[2810]: W1030 00:06:38.360322 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.360431 kubelet[2810]: E1030 00:06:38.360334 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.360668 kubelet[2810]: E1030 00:06:38.360640 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.360668 kubelet[2810]: W1030 00:06:38.360658 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.360741 kubelet[2810]: E1030 00:06:38.360671 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.361114 kubelet[2810]: E1030 00:06:38.361058 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.361177 kubelet[2810]: W1030 00:06:38.361153 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.361177 kubelet[2810]: E1030 00:06:38.361169 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.361898 kubelet[2810]: E1030 00:06:38.361877 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.361898 kubelet[2810]: W1030 00:06:38.361894 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.361971 kubelet[2810]: E1030 00:06:38.361908 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.362275 kubelet[2810]: E1030 00:06:38.362255 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.362275 kubelet[2810]: W1030 00:06:38.362271 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.362347 kubelet[2810]: E1030 00:06:38.362284 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.363398 kubelet[2810]: E1030 00:06:38.363374 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.363398 kubelet[2810]: W1030 00:06:38.363395 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.363680 kubelet[2810]: E1030 00:06:38.363408 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.365134 kubelet[2810]: E1030 00:06:38.365041 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.365134 kubelet[2810]: W1030 00:06:38.365063 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.365134 kubelet[2810]: E1030 00:06:38.365118 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.365463 kubelet[2810]: E1030 00:06:38.365387 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.365463 kubelet[2810]: W1030 00:06:38.365403 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.365463 kubelet[2810]: E1030 00:06:38.365426 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:38.373288 kubelet[2810]: E1030 00:06:38.373243 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:38.373392 kubelet[2810]: W1030 00:06:38.373298 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:38.373392 kubelet[2810]: E1030 00:06:38.373324 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:38.399440 systemd[1]: Started cri-containerd-fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6.scope - libcontainer container fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6. Oct 30 00:06:38.440612 containerd[1621]: time="2025-10-30T00:06:38.440421662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p998t,Uid:665abdc2-126d-4be1-9fe9-144daad4992b,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\"" Oct 30 00:06:38.442355 kubelet[2810]: E1030 00:06:38.442311 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:39.607062 kubelet[2810]: E1030 00:06:39.606987 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:40.227998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177740170.mount: Deactivated 
successfully. Oct 30 00:06:40.662110 containerd[1621]: time="2025-10-30T00:06:40.662034058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:40.664122 containerd[1621]: time="2025-10-30T00:06:40.664054251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 00:06:40.665511 containerd[1621]: time="2025-10-30T00:06:40.665460030Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:40.668579 containerd[1621]: time="2025-10-30T00:06:40.668514215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:40.669749 containerd[1621]: time="2025-10-30T00:06:40.669159736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.332260776s" Oct 30 00:06:40.669749 containerd[1621]: time="2025-10-30T00:06:40.669195643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 00:06:40.670246 containerd[1621]: time="2025-10-30T00:06:40.670221159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:06:40.683828 containerd[1621]: time="2025-10-30T00:06:40.683778152Z" level=info msg="CreateContainer within sandbox 
\"c73fcf9570cbdcf3c9f58ea66e0567007f7b974507e6bd2066da1452bfe91620\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 00:06:40.690409 containerd[1621]: time="2025-10-30T00:06:40.690365770Z" level=info msg="Container 9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:40.698583 containerd[1621]: time="2025-10-30T00:06:40.698553563Z" level=info msg="CreateContainer within sandbox \"c73fcf9570cbdcf3c9f58ea66e0567007f7b974507e6bd2066da1452bfe91620\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c\"" Oct 30 00:06:40.699144 containerd[1621]: time="2025-10-30T00:06:40.699054303Z" level=info msg="StartContainer for \"9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c\"" Oct 30 00:06:40.700526 containerd[1621]: time="2025-10-30T00:06:40.700477084Z" level=info msg="connecting to shim 9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c" address="unix:///run/containerd/s/0f6793a6fa2e79a74f2a4fcbc90c95b977e1addbaf5d577811a6fb41c7dc363b" protocol=ttrpc version=3 Oct 30 00:06:40.727244 systemd[1]: Started cri-containerd-9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c.scope - libcontainer container 9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c. 
Oct 30 00:06:40.787709 containerd[1621]: time="2025-10-30T00:06:40.787656021Z" level=info msg="StartContainer for \"9b36ada699969fff5097df3f03e85b68386445438924e372622d2c3a27e8f28c\" returns successfully" Oct 30 00:06:41.606632 kubelet[2810]: E1030 00:06:41.606581 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:41.684765 kubelet[2810]: E1030 00:06:41.684731 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:41.763615 kubelet[2810]: E1030 00:06:41.763557 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.763615 kubelet[2810]: W1030 00:06:41.763588 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.763615 kubelet[2810]: E1030 00:06:41.763612 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.763846 kubelet[2810]: E1030 00:06:41.763826 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.763846 kubelet[2810]: W1030 00:06:41.763837 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.763900 kubelet[2810]: E1030 00:06:41.763847 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:41.764098 kubelet[2810]: E1030 00:06:41.764048 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.764098 kubelet[2810]: W1030 00:06:41.764062 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.764098 kubelet[2810]: E1030 00:06:41.764072 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.764355 kubelet[2810]: E1030 00:06:41.764326 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.764355 kubelet[2810]: W1030 00:06:41.764339 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.764355 kubelet[2810]: E1030 00:06:41.764349 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:41.764591 kubelet[2810]: E1030 00:06:41.764562 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.764591 kubelet[2810]: W1030 00:06:41.764576 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.764591 kubelet[2810]: E1030 00:06:41.764586 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.764778 kubelet[2810]: E1030 00:06:41.764760 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.764778 kubelet[2810]: W1030 00:06:41.764772 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.764826 kubelet[2810]: E1030 00:06:41.764782 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:41.764974 kubelet[2810]: E1030 00:06:41.764951 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.764974 kubelet[2810]: W1030 00:06:41.764963 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.765036 kubelet[2810]: E1030 00:06:41.764973 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.765199 kubelet[2810]: E1030 00:06:41.765170 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.765199 kubelet[2810]: W1030 00:06:41.765183 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.765199 kubelet[2810]: E1030 00:06:41.765194 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:41.765395 kubelet[2810]: E1030 00:06:41.765379 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.765395 kubelet[2810]: W1030 00:06:41.765391 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.765482 kubelet[2810]: E1030 00:06:41.765400 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.765608 kubelet[2810]: E1030 00:06:41.765588 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.765608 kubelet[2810]: W1030 00:06:41.765600 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.765656 kubelet[2810]: E1030 00:06:41.765610 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:41.765802 kubelet[2810]: E1030 00:06:41.765785 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.765802 kubelet[2810]: W1030 00:06:41.765797 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.765852 kubelet[2810]: E1030 00:06:41.765806 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.765996 kubelet[2810]: E1030 00:06:41.765980 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.765996 kubelet[2810]: W1030 00:06:41.765991 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.766043 kubelet[2810]: E1030 00:06:41.766001 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:41.766227 kubelet[2810]: E1030 00:06:41.766208 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:41.766227 kubelet[2810]: W1030 00:06:41.766220 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:41.766282 kubelet[2810]: E1030 00:06:41.766230 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:41.895969 kubelet[2810]: I1030 00:06:41.895634 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-868d486c47-nd5cz" podStartSLOduration=2.558341948 podStartE2EDuration="4.895615387s" podCreationTimestamp="2025-10-30 00:06:37 +0000 UTC" firstStartedPulling="2025-10-30 00:06:38.332635524 +0000 UTC m=+22.875463741" lastFinishedPulling="2025-10-30 00:06:40.669908963 +0000 UTC m=+25.212737180" observedRunningTime="2025-10-30 00:06:41.895383782 +0000 UTC m=+26.438211999" watchObservedRunningTime="2025-10-30 00:06:41.895615387 +0000 UTC m=+26.438443604" Oct 30 00:06:42.683176 containerd[1621]: time="2025-10-30T00:06:42.683102742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:42.684050 containerd[1621]: time="2025-10-30T00:06:42.684011157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 00:06:42.685602 containerd[1621]: time="2025-10-30T00:06:42.685550537Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:42.686433 kubelet[2810]: I1030 00:06:42.686389 2810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:06:42.686838 kubelet[2810]: E1030 00:06:42.686747 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:42.688038 containerd[1621]: time="2025-10-30T00:06:42.687997590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Oct 30 00:06:42.688657 containerd[1621]: time="2025-10-30T00:06:42.688593941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.018342333s" Oct 30 00:06:42.688657 containerd[1621]: time="2025-10-30T00:06:42.688634757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 00:06:42.694101 containerd[1621]: time="2025-10-30T00:06:42.694047658Z" level=info msg="CreateContainer within sandbox \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 00:06:42.704216 containerd[1621]: time="2025-10-30T00:06:42.704153780Z" level=info msg="Container aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:42.714679 containerd[1621]: time="2025-10-30T00:06:42.714555616Z" level=info msg="CreateContainer within sandbox \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\"" Oct 30 00:06:42.715283 containerd[1621]: time="2025-10-30T00:06:42.715226946Z" level=info msg="StartContainer for \"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\"" Oct 30 00:06:42.717191 containerd[1621]: time="2025-10-30T00:06:42.717152431Z" level=info msg="connecting to shim aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3" 
address="unix:///run/containerd/s/a21d5004ed885a30e5f00c198042a8272a83c2b0e07a0cb8b234426ceddb907f" protocol=ttrpc version=3 Oct 30 00:06:42.746426 systemd[1]: Started cri-containerd-aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3.scope - libcontainer container aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3. Oct 30 00:06:42.772720 kubelet[2810]: E1030 00:06:42.772642 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.772720 kubelet[2810]: W1030 00:06:42.772675 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.772720 kubelet[2810]: E1030 00:06:42.772703 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:42.772976 kubelet[2810]: E1030 00:06:42.772928 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.772976 kubelet[2810]: W1030 00:06:42.772938 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.772976 kubelet[2810]: E1030 00:06:42.772947 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:42.773215 kubelet[2810]: E1030 00:06:42.773177 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.773215 kubelet[2810]: W1030 00:06:42.773191 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.773215 kubelet[2810]: E1030 00:06:42.773203 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:42.773517 kubelet[2810]: E1030 00:06:42.773488 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.773517 kubelet[2810]: W1030 00:06:42.773501 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.773517 kubelet[2810]: E1030 00:06:42.773513 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:42.797174 kubelet[2810]: E1030 00:06:42.797148 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.797174 kubelet[2810]: W1030 00:06:42.797160 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.797174 kubelet[2810]: E1030 00:06:42.797170 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:42.797369 kubelet[2810]: E1030 00:06:42.797345 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.797369 kubelet[2810]: W1030 00:06:42.797356 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.797369 kubelet[2810]: E1030 00:06:42.797364 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:42.797563 kubelet[2810]: E1030 00:06:42.797534 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.797563 kubelet[2810]: W1030 00:06:42.797543 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.797563 kubelet[2810]: E1030 00:06:42.797551 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:42.797827 kubelet[2810]: E1030 00:06:42.797806 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.797827 kubelet[2810]: W1030 00:06:42.797818 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.797827 kubelet[2810]: E1030 00:06:42.797826 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:06:42.798240 kubelet[2810]: E1030 00:06:42.798221 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:06:42.798240 kubelet[2810]: W1030 00:06:42.798233 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:06:42.798240 kubelet[2810]: E1030 00:06:42.798243 2810 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:06:42.804820 containerd[1621]: time="2025-10-30T00:06:42.804686071Z" level=info msg="StartContainer for \"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\" returns successfully" Oct 30 00:06:42.824755 systemd[1]: cri-containerd-aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3.scope: Deactivated successfully. Oct 30 00:06:42.825999 systemd[1]: cri-containerd-aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3.scope: Consumed 50ms CPU time, 6.4M memory peak, 4.6M written to disk. 
Oct 30 00:06:42.828586 containerd[1621]: time="2025-10-30T00:06:42.826297831Z" level=info msg="received exit event container_id:\"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\" id:\"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\" pid:3508 exited_at:{seconds:1761782802 nanos:825862824}" Oct 30 00:06:42.828586 containerd[1621]: time="2025-10-30T00:06:42.826434809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\" id:\"aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3\" pid:3508 exited_at:{seconds:1761782802 nanos:825862824}" Oct 30 00:06:42.852126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa8450a7e313589d3f76edffa95aa8235e84bab1a156306d9bf0d6658bb0dff3-rootfs.mount: Deactivated successfully. Oct 30 00:06:43.609374 kubelet[2810]: E1030 00:06:43.609323 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:43.690099 kubelet[2810]: E1030 00:06:43.690047 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:44.694264 kubelet[2810]: E1030 00:06:44.694205 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:44.695105 containerd[1621]: time="2025-10-30T00:06:44.695037845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 00:06:45.606741 kubelet[2810]: E1030 00:06:45.606681 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:47.607468 kubelet[2810]: E1030 00:06:47.606706 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:49.578823 containerd[1621]: time="2025-10-30T00:06:49.578718995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:49.581024 containerd[1621]: time="2025-10-30T00:06:49.580973306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 00:06:49.582812 containerd[1621]: time="2025-10-30T00:06:49.582731716Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:49.585996 containerd[1621]: time="2025-10-30T00:06:49.585952480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:06:49.586793 containerd[1621]: time="2025-10-30T00:06:49.586598712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 
4.891521634s" Oct 30 00:06:49.586793 containerd[1621]: time="2025-10-30T00:06:49.586627435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 00:06:49.592070 containerd[1621]: time="2025-10-30T00:06:49.591992093Z" level=info msg="CreateContainer within sandbox \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 00:06:49.602212 containerd[1621]: time="2025-10-30T00:06:49.602141409Z" level=info msg="Container a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:06:49.607595 kubelet[2810]: E1030 00:06:49.607153 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:49.616509 containerd[1621]: time="2025-10-30T00:06:49.616448857Z" level=info msg="CreateContainer within sandbox \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\"" Oct 30 00:06:49.616958 containerd[1621]: time="2025-10-30T00:06:49.616933818Z" level=info msg="StartContainer for \"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\"" Oct 30 00:06:49.618580 containerd[1621]: time="2025-10-30T00:06:49.618540632Z" level=info msg="connecting to shim a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb" address="unix:///run/containerd/s/a21d5004ed885a30e5f00c198042a8272a83c2b0e07a0cb8b234426ceddb907f" protocol=ttrpc version=3 Oct 30 00:06:49.646252 systemd[1]: Started 
cri-containerd-a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb.scope - libcontainer container a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb. Oct 30 00:06:49.749028 containerd[1621]: time="2025-10-30T00:06:49.748324668Z" level=info msg="StartContainer for \"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\" returns successfully" Oct 30 00:06:50.763283 kubelet[2810]: E1030 00:06:50.763216 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:51.606993 kubelet[2810]: E1030 00:06:51.606919 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:51.765027 kubelet[2810]: E1030 00:06:51.764964 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:52.727346 systemd[1]: cri-containerd-a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb.scope: Deactivated successfully. Oct 30 00:06:52.728378 systemd[1]: cri-containerd-a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb.scope: Consumed 673ms CPU time, 180.8M memory peak, 3.2M read from disk, 171.3M written to disk. 
Oct 30 00:06:52.729169 containerd[1621]: time="2025-10-30T00:06:52.728595935Z" level=info msg="received exit event container_id:\"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\" id:\"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\" pid:3602 exited_at:{seconds:1761782812 nanos:728292946}" Oct 30 00:06:52.729169 containerd[1621]: time="2025-10-30T00:06:52.728785610Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\" id:\"a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb\" pid:3602 exited_at:{seconds:1761782812 nanos:728292946}" Oct 30 00:06:52.758198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c08214bcc2144170cd4fa6c0af35b2a813130d4b144c69e546f3a68d9789cb-rootfs.mount: Deactivated successfully. Oct 30 00:06:52.786882 kubelet[2810]: I1030 00:06:52.786828 2810 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 30 00:06:53.268342 systemd[1]: Created slice kubepods-besteffort-pod2f8d59a7_a1e6_4aca_91ca_94959e3f1a19.slice - libcontainer container kubepods-besteffort-pod2f8d59a7_a1e6_4aca_91ca_94959e3f1a19.slice. Oct 30 00:06:53.276255 systemd[1]: Created slice kubepods-besteffort-podfb783c3f_c8d1_42f0_a262_b7fd408f60b3.slice - libcontainer container kubepods-besteffort-podfb783c3f_c8d1_42f0_a262_b7fd408f60b3.slice. Oct 30 00:06:53.291172 systemd[1]: Created slice kubepods-burstable-pod6477e341_9cbb_4bbc_b90e_fcc438b0b3a9.slice - libcontainer container kubepods-burstable-pod6477e341_9cbb_4bbc_b90e_fcc438b0b3a9.slice. Oct 30 00:06:53.300503 systemd[1]: Created slice kubepods-besteffort-pod5d5c2c33_987b_44fa_be72_89d7d6488ff0.slice - libcontainer container kubepods-besteffort-pod5d5c2c33_987b_44fa_be72_89d7d6488ff0.slice. 
Oct 30 00:06:53.307784 systemd[1]: Created slice kubepods-burstable-pod2c069f41_6fd8_469f_b76f_46d048b85fa4.slice - libcontainer container kubepods-burstable-pod2c069f41_6fd8_469f_b76f_46d048b85fa4.slice. Oct 30 00:06:53.315000 systemd[1]: Created slice kubepods-besteffort-pod6ca14667_e807_40b3_a7f9_d2bc31653373.slice - libcontainer container kubepods-besteffort-pod6ca14667_e807_40b3_a7f9_d2bc31653373.slice. Oct 30 00:06:53.330439 systemd[1]: Created slice kubepods-besteffort-pod0a7e0678_b33a_4d3a_b42a_5b4a4c30629b.slice - libcontainer container kubepods-besteffort-pod0a7e0678_b33a_4d3a_b42a_5b4a4c30629b.slice. Oct 30 00:06:53.372824 kubelet[2810]: I1030 00:06:53.372742 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0a7e0678-b33a-4d3a-b42a-5b4a4c30629b-goldmane-key-pair\") pod \"goldmane-666569f655-g2t2x\" (UID: \"0a7e0678-b33a-4d3a-b42a-5b4a4c30629b\") " pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:06:53.372824 kubelet[2810]: I1030 00:06:53.372797 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f8d59a7-a1e6-4aca-91ca-94959e3f1a19-calico-apiserver-certs\") pod \"calico-apiserver-869977fb74-rhndp\" (UID: \"2f8d59a7-a1e6-4aca-91ca-94959e3f1a19\") " pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" Oct 30 00:06:53.372824 kubelet[2810]: I1030 00:06:53.372829 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68v4m\" (UniqueName: \"kubernetes.io/projected/2f8d59a7-a1e6-4aca-91ca-94959e3f1a19-kube-api-access-68v4m\") pod \"calico-apiserver-869977fb74-rhndp\" (UID: \"2f8d59a7-a1e6-4aca-91ca-94959e3f1a19\") " pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" Oct 30 00:06:53.372824 kubelet[2810]: I1030 00:06:53.372850 2810 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbv6t\" (UniqueName: \"kubernetes.io/projected/6477e341-9cbb-4bbc-b90e-fcc438b0b3a9-kube-api-access-gbv6t\") pod \"coredns-674b8bbfcf-xgt5r\" (UID: \"6477e341-9cbb-4bbc-b90e-fcc438b0b3a9\") " pod="kube-system/coredns-674b8bbfcf-xgt5r" Oct 30 00:06:53.373247 kubelet[2810]: I1030 00:06:53.372888 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzbbs\" (UniqueName: \"kubernetes.io/projected/5d5c2c33-987b-44fa-be72-89d7d6488ff0-kube-api-access-qzbbs\") pod \"calico-apiserver-869977fb74-wfqp7\" (UID: \"5d5c2c33-987b-44fa-be72-89d7d6488ff0\") " pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" Oct 30 00:06:53.373247 kubelet[2810]: I1030 00:06:53.372932 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52lg\" (UniqueName: \"kubernetes.io/projected/fb783c3f-c8d1-42f0-a262-b7fd408f60b3-kube-api-access-v52lg\") pod \"calico-kube-controllers-69bd87fbdd-zshjp\" (UID: \"fb783c3f-c8d1-42f0-a262-b7fd408f60b3\") " pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" Oct 30 00:06:53.373247 kubelet[2810]: I1030 00:06:53.372971 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlfj\" (UniqueName: \"kubernetes.io/projected/2c069f41-6fd8-469f-b76f-46d048b85fa4-kube-api-access-zvlfj\") pod \"coredns-674b8bbfcf-rxbrv\" (UID: \"2c069f41-6fd8-469f-b76f-46d048b85fa4\") " pod="kube-system/coredns-674b8bbfcf-rxbrv" Oct 30 00:06:53.373247 kubelet[2810]: I1030 00:06:53.373006 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9jvq\" (UniqueName: \"kubernetes.io/projected/6ca14667-e807-40b3-a7f9-d2bc31653373-kube-api-access-w9jvq\") pod \"whisker-dc6679f96-hdbnt\" (UID: \"6ca14667-e807-40b3-a7f9-d2bc31653373\") " 
pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:06:53.373247 kubelet[2810]: I1030 00:06:53.373039 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6b5\" (UniqueName: \"kubernetes.io/projected/0a7e0678-b33a-4d3a-b42a-5b4a4c30629b-kube-api-access-lt6b5\") pod \"goldmane-666569f655-g2t2x\" (UID: \"0a7e0678-b33a-4d3a-b42a-5b4a4c30629b\") " pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:06:53.373416 kubelet[2810]: I1030 00:06:53.373098 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6477e341-9cbb-4bbc-b90e-fcc438b0b3a9-config-volume\") pod \"coredns-674b8bbfcf-xgt5r\" (UID: \"6477e341-9cbb-4bbc-b90e-fcc438b0b3a9\") " pod="kube-system/coredns-674b8bbfcf-xgt5r" Oct 30 00:06:53.373416 kubelet[2810]: I1030 00:06:53.373130 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d5c2c33-987b-44fa-be72-89d7d6488ff0-calico-apiserver-certs\") pod \"calico-apiserver-869977fb74-wfqp7\" (UID: \"5d5c2c33-987b-44fa-be72-89d7d6488ff0\") " pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" Oct 30 00:06:53.373416 kubelet[2810]: I1030 00:06:53.373174 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-backend-key-pair\") pod \"whisker-dc6679f96-hdbnt\" (UID: \"6ca14667-e807-40b3-a7f9-d2bc31653373\") " pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:06:53.373416 kubelet[2810]: I1030 00:06:53.373196 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-ca-bundle\") pod 
\"whisker-dc6679f96-hdbnt\" (UID: \"6ca14667-e807-40b3-a7f9-d2bc31653373\") " pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:06:53.373416 kubelet[2810]: I1030 00:06:53.373212 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a7e0678-b33a-4d3a-b42a-5b4a4c30629b-goldmane-ca-bundle\") pod \"goldmane-666569f655-g2t2x\" (UID: \"0a7e0678-b33a-4d3a-b42a-5b4a4c30629b\") " pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:06:53.373569 kubelet[2810]: I1030 00:06:53.373232 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb783c3f-c8d1-42f0-a262-b7fd408f60b3-tigera-ca-bundle\") pod \"calico-kube-controllers-69bd87fbdd-zshjp\" (UID: \"fb783c3f-c8d1-42f0-a262-b7fd408f60b3\") " pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" Oct 30 00:06:53.373569 kubelet[2810]: I1030 00:06:53.373264 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c069f41-6fd8-469f-b76f-46d048b85fa4-config-volume\") pod \"coredns-674b8bbfcf-rxbrv\" (UID: \"2c069f41-6fd8-469f-b76f-46d048b85fa4\") " pod="kube-system/coredns-674b8bbfcf-rxbrv" Oct 30 00:06:53.373569 kubelet[2810]: I1030 00:06:53.373282 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7e0678-b33a-4d3a-b42a-5b4a4c30629b-config\") pod \"goldmane-666569f655-g2t2x\" (UID: \"0a7e0678-b33a-4d3a-b42a-5b4a4c30629b\") " pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:06:53.574245 containerd[1621]: time="2025-10-30T00:06:53.574053714Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-869977fb74-rhndp,Uid:2f8d59a7-a1e6-4aca-91ca-94959e3f1a19,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:06:53.584897 containerd[1621]: time="2025-10-30T00:06:53.584839311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bd87fbdd-zshjp,Uid:fb783c3f-c8d1-42f0-a262-b7fd408f60b3,Namespace:calico-system,Attempt:0,}" Oct 30 00:06:53.599342 kubelet[2810]: E1030 00:06:53.599286 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:53.600906 containerd[1621]: time="2025-10-30T00:06:53.599862638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgt5r,Uid:6477e341-9cbb-4bbc-b90e-fcc438b0b3a9,Namespace:kube-system,Attempt:0,}" Oct 30 00:06:53.605315 containerd[1621]: time="2025-10-30T00:06:53.605268100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-wfqp7,Uid:5d5c2c33-987b-44fa-be72-89d7d6488ff0,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:06:53.613305 kubelet[2810]: E1030 00:06:53.613268 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:53.614279 containerd[1621]: time="2025-10-30T00:06:53.614238281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbrv,Uid:2c069f41-6fd8-469f-b76f-46d048b85fa4,Namespace:kube-system,Attempt:0,}" Oct 30 00:06:53.623796 systemd[1]: Created slice kubepods-besteffort-podd62c2877_00ac_4394_911e_002e28febfd2.slice - libcontainer container kubepods-besteffort-podd62c2877_00ac_4394_911e_002e28febfd2.slice. 
Oct 30 00:06:53.625237 containerd[1621]: time="2025-10-30T00:06:53.625187155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc6679f96-hdbnt,Uid:6ca14667-e807-40b3-a7f9-d2bc31653373,Namespace:calico-system,Attempt:0,}" Oct 30 00:06:53.663745 containerd[1621]: time="2025-10-30T00:06:53.663688092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2t2x,Uid:0a7e0678-b33a-4d3a-b42a-5b4a4c30629b,Namespace:calico-system,Attempt:0,}" Oct 30 00:06:53.679163 containerd[1621]: time="2025-10-30T00:06:53.677713177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgd9q,Uid:d62c2877-00ac-4394-911e-002e28febfd2,Namespace:calico-system,Attempt:0,}" Oct 30 00:06:53.791116 kubelet[2810]: E1030 00:06:53.790328 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:53.793357 containerd[1621]: time="2025-10-30T00:06:53.793309748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:06:53.885320 containerd[1621]: time="2025-10-30T00:06:53.885151058Z" level=error msg="Failed to destroy network for sandbox \"7eec3596ab66752806e785d5f70ede172b75bb9d96708319136934101e624c63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.891490 systemd[1]: run-netns-cni\x2d380f4292\x2db86e\x2df893\x2d537d\x2dc180ee78f819.mount: Deactivated successfully. 
Oct 30 00:06:53.900829 containerd[1621]: time="2025-10-30T00:06:53.900763431Z" level=error msg="Failed to destroy network for sandbox \"1b758ab964faeff41a2aa4949f54ea014d5aa437c74e092e24bd2d0a8354670c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.907070 systemd[1]: run-netns-cni\x2db85781e8\x2d299e\x2d69f1\x2d5c09\x2d8ddde22e4d47.mount: Deactivated successfully. Oct 30 00:06:53.912738 containerd[1621]: time="2025-10-30T00:06:53.912640005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-rhndp,Uid:2f8d59a7-a1e6-4aca-91ca-94959e3f1a19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b758ab964faeff41a2aa4949f54ea014d5aa437c74e092e24bd2d0a8354670c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.913124 kubelet[2810]: E1030 00:06:53.913018 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b758ab964faeff41a2aa4949f54ea014d5aa437c74e092e24bd2d0a8354670c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.913124 kubelet[2810]: E1030 00:06:53.913159 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b758ab964faeff41a2aa4949f54ea014d5aa437c74e092e24bd2d0a8354670c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" Oct 30 00:06:53.913124 kubelet[2810]: E1030 00:06:53.913197 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b758ab964faeff41a2aa4949f54ea014d5aa437c74e092e24bd2d0a8354670c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" Oct 30 00:06:53.913622 kubelet[2810]: E1030 00:06:53.913309 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b758ab964faeff41a2aa4949f54ea014d5aa437c74e092e24bd2d0a8354670c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:06:53.915570 containerd[1621]: time="2025-10-30T00:06:53.914057214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc6679f96-hdbnt,Uid:6ca14667-e807-40b3-a7f9-d2bc31653373,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eec3596ab66752806e785d5f70ede172b75bb9d96708319136934101e624c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Oct 30 00:06:53.915570 containerd[1621]: time="2025-10-30T00:06:53.914506617Z" level=error msg="Failed to destroy network for sandbox \"ec7c0bdd8792be3a2d873b1e4a203d1534a96ec84946fe22df9bd3373ef4555e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.915729 kubelet[2810]: E1030 00:06:53.914259 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eec3596ab66752806e785d5f70ede172b75bb9d96708319136934101e624c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.915729 kubelet[2810]: E1030 00:06:53.914285 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eec3596ab66752806e785d5f70ede172b75bb9d96708319136934101e624c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:06:53.915729 kubelet[2810]: E1030 00:06:53.914301 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eec3596ab66752806e785d5f70ede172b75bb9d96708319136934101e624c63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:06:53.915820 kubelet[2810]: E1030 00:06:53.914350 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-dc6679f96-hdbnt_calico-system(6ca14667-e807-40b3-a7f9-d2bc31653373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc6679f96-hdbnt_calico-system(6ca14667-e807-40b3-a7f9-d2bc31653373)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eec3596ab66752806e785d5f70ede172b75bb9d96708319136934101e624c63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc6679f96-hdbnt" podUID="6ca14667-e807-40b3-a7f9-d2bc31653373" Oct 30 00:06:53.918520 systemd[1]: run-netns-cni\x2d8132e882\x2d0da9\x2d0c3f\x2d3b00\x2d47a603bf6240.mount: Deactivated successfully. Oct 30 00:06:53.920068 containerd[1621]: time="2025-10-30T00:06:53.920016856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbrv,Uid:2c069f41-6fd8-469f-b76f-46d048b85fa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7c0bdd8792be3a2d873b1e4a203d1534a96ec84946fe22df9bd3373ef4555e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.921394 kubelet[2810]: E1030 00:06:53.921304 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7c0bdd8792be3a2d873b1e4a203d1534a96ec84946fe22df9bd3373ef4555e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.921495 kubelet[2810]: E1030 00:06:53.921421 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ec7c0bdd8792be3a2d873b1e4a203d1534a96ec84946fe22df9bd3373ef4555e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbrv" Oct 30 00:06:53.921495 kubelet[2810]: E1030 00:06:53.921441 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7c0bdd8792be3a2d873b1e4a203d1534a96ec84946fe22df9bd3373ef4555e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbrv" Oct 30 00:06:53.921552 kubelet[2810]: E1030 00:06:53.921490 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rxbrv_kube-system(2c069f41-6fd8-469f-b76f-46d048b85fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rxbrv_kube-system(2c069f41-6fd8-469f-b76f-46d048b85fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec7c0bdd8792be3a2d873b1e4a203d1534a96ec84946fe22df9bd3373ef4555e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rxbrv" podUID="2c069f41-6fd8-469f-b76f-46d048b85fa4" Oct 30 00:06:53.924934 containerd[1621]: time="2025-10-30T00:06:53.924852259Z" level=error msg="Failed to destroy network for sandbox \"f3bf605c24c6656efea2e054e6a52f7e7eee8d69f915521882e557ce664bc20f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.931569 
systemd[1]: run-netns-cni\x2db0d6cd26\x2dab9e\x2da5cc\x2d6ec1\x2de73cfe85263c.mount: Deactivated successfully. Oct 30 00:06:53.933907 containerd[1621]: time="2025-10-30T00:06:53.933324985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bd87fbdd-zshjp,Uid:fb783c3f-c8d1-42f0-a262-b7fd408f60b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3bf605c24c6656efea2e054e6a52f7e7eee8d69f915521882e557ce664bc20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.934025 kubelet[2810]: E1030 00:06:53.933688 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3bf605c24c6656efea2e054e6a52f7e7eee8d69f915521882e557ce664bc20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.934025 kubelet[2810]: E1030 00:06:53.933779 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3bf605c24c6656efea2e054e6a52f7e7eee8d69f915521882e557ce664bc20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" Oct 30 00:06:53.934025 kubelet[2810]: E1030 00:06:53.933807 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3bf605c24c6656efea2e054e6a52f7e7eee8d69f915521882e557ce664bc20f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" Oct 30 00:06:53.934249 kubelet[2810]: E1030 00:06:53.933859 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3bf605c24c6656efea2e054e6a52f7e7eee8d69f915521882e557ce664bc20f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:06:53.945124 containerd[1621]: time="2025-10-30T00:06:53.944988109Z" level=error msg="Failed to destroy network for sandbox \"f6159d3a9bbf9b3612c88afd328ba4b27e355ab8423f18fa9e4ff799f649fafd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.949419 containerd[1621]: time="2025-10-30T00:06:53.949265865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgd9q,Uid:d62c2877-00ac-4394-911e-002e28febfd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6159d3a9bbf9b3612c88afd328ba4b27e355ab8423f18fa9e4ff799f649fafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 
00:06:53.949703 kubelet[2810]: E1030 00:06:53.949605 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6159d3a9bbf9b3612c88afd328ba4b27e355ab8423f18fa9e4ff799f649fafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.949999 kubelet[2810]: E1030 00:06:53.949915 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6159d3a9bbf9b3612c88afd328ba4b27e355ab8423f18fa9e4ff799f649fafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:53.950240 kubelet[2810]: E1030 00:06:53.950203 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6159d3a9bbf9b3612c88afd328ba4b27e355ab8423f18fa9e4ff799f649fafd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:06:53.950377 kubelet[2810]: E1030 00:06:53.950313 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6159d3a9bbf9b3612c88afd328ba4b27e355ab8423f18fa9e4ff799f649fafd\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:06:53.959986 containerd[1621]: time="2025-10-30T00:06:53.959912972Z" level=error msg="Failed to destroy network for sandbox \"f36e1564a36c0a81df1ca44c24f16769bb89479e66ba6db47c0231acc4493523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.961587 containerd[1621]: time="2025-10-30T00:06:53.961511591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2t2x,Uid:0a7e0678-b33a-4d3a-b42a-5b4a4c30629b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36e1564a36c0a81df1ca44c24f16769bb89479e66ba6db47c0231acc4493523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.961982 kubelet[2810]: E1030 00:06:53.961943 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36e1564a36c0a81df1ca44c24f16769bb89479e66ba6db47c0231acc4493523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.962052 kubelet[2810]: E1030 00:06:53.962009 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36e1564a36c0a81df1ca44c24f16769bb89479e66ba6db47c0231acc4493523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:06:53.962052 kubelet[2810]: E1030 00:06:53.962043 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f36e1564a36c0a81df1ca44c24f16769bb89479e66ba6db47c0231acc4493523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:06:53.962201 kubelet[2810]: E1030 00:06:53.962150 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f36e1564a36c0a81df1ca44c24f16769bb89479e66ba6db47c0231acc4493523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:06:53.964790 containerd[1621]: time="2025-10-30T00:06:53.964746471Z" level=error msg="Failed to destroy network for sandbox \"415cbabdc5abdccd87fb744d91ef3459f12dd04c1da48ced9855c8394f230180\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.966843 containerd[1621]: time="2025-10-30T00:06:53.966326396Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-869977fb74-wfqp7,Uid:5d5c2c33-987b-44fa-be72-89d7d6488ff0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"415cbabdc5abdccd87fb744d91ef3459f12dd04c1da48ced9855c8394f230180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.966843 containerd[1621]: time="2025-10-30T00:06:53.966748898Z" level=error msg="Failed to destroy network for sandbox \"bcefba6d22ccd981602e3b1a10bbf8ee4b017f6b823acc8f5ec05f4b329a70f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.967196 kubelet[2810]: E1030 00:06:53.967166 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"415cbabdc5abdccd87fb744d91ef3459f12dd04c1da48ced9855c8394f230180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.967302 kubelet[2810]: E1030 00:06:53.967213 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"415cbabdc5abdccd87fb744d91ef3459f12dd04c1da48ced9855c8394f230180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" Oct 30 00:06:53.967302 kubelet[2810]: E1030 00:06:53.967233 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"415cbabdc5abdccd87fb744d91ef3459f12dd04c1da48ced9855c8394f230180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" Oct 30 00:06:53.967302 kubelet[2810]: E1030 00:06:53.967282 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"415cbabdc5abdccd87fb744d91ef3459f12dd04c1da48ced9855c8394f230180\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:06:53.968200 containerd[1621]: time="2025-10-30T00:06:53.968151289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgt5r,Uid:6477e341-9cbb-4bbc-b90e-fcc438b0b3a9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcefba6d22ccd981602e3b1a10bbf8ee4b017f6b823acc8f5ec05f4b329a70f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.968372 kubelet[2810]: E1030 00:06:53.968339 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bcefba6d22ccd981602e3b1a10bbf8ee4b017f6b823acc8f5ec05f4b329a70f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:06:53.968412 kubelet[2810]: E1030 00:06:53.968377 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcefba6d22ccd981602e3b1a10bbf8ee4b017f6b823acc8f5ec05f4b329a70f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xgt5r" Oct 30 00:06:53.968412 kubelet[2810]: E1030 00:06:53.968393 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcefba6d22ccd981602e3b1a10bbf8ee4b017f6b823acc8f5ec05f4b329a70f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xgt5r" Oct 30 00:06:53.968464 kubelet[2810]: E1030 00:06:53.968430 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xgt5r_kube-system(6477e341-9cbb-4bbc-b90e-fcc438b0b3a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xgt5r_kube-system(6477e341-9cbb-4bbc-b90e-fcc438b0b3a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcefba6d22ccd981602e3b1a10bbf8ee4b017f6b823acc8f5ec05f4b329a70f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xgt5r" 
podUID="6477e341-9cbb-4bbc-b90e-fcc438b0b3a9" Oct 30 00:06:54.759152 systemd[1]: run-netns-cni\x2d9813ad53\x2dbf05\x2dcfb6\x2d57e2\x2d6fa933cd948a.mount: Deactivated successfully. Oct 30 00:06:54.759301 systemd[1]: run-netns-cni\x2dd8ad1519\x2d284c\x2d71c1\x2db68a\x2db8317e749f91.mount: Deactivated successfully. Oct 30 00:06:54.759388 systemd[1]: run-netns-cni\x2d4213aeae\x2d88f0\x2df421\x2d6097\x2d6762bfd235aa.mount: Deactivated successfully. Oct 30 00:06:54.759474 systemd[1]: run-netns-cni\x2dda87e65c\x2d767a\x2de3bb\x2d0f48\x2dc865a4e0a113.mount: Deactivated successfully. Oct 30 00:06:58.780617 kubelet[2810]: I1030 00:06:58.780537 2810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:06:58.781254 kubelet[2810]: E1030 00:06:58.781023 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:06:58.800866 kubelet[2810]: E1030 00:06:58.800806 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:01.414737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3698053415.mount: Deactivated successfully. 
Oct 30 00:07:04.606653 kubelet[2810]: E1030 00:07:04.606588 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:04.610561 containerd[1621]: time="2025-10-30T00:07:04.610513815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgt5r,Uid:6477e341-9cbb-4bbc-b90e-fcc438b0b3a9,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:04.611144 containerd[1621]: time="2025-10-30T00:07:04.611016584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc6679f96-hdbnt,Uid:6ca14667-e807-40b3-a7f9-d2bc31653373,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:04.611473 containerd[1621]: time="2025-10-30T00:07:04.611441593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bd87fbdd-zshjp,Uid:fb783c3f-c8d1-42f0-a262-b7fd408f60b3,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:05.609140 containerd[1621]: time="2025-10-30T00:07:05.609071505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-wfqp7,Uid:5d5c2c33-987b-44fa-be72-89d7d6488ff0,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:07:06.606435 kubelet[2810]: E1030 00:07:06.606381 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:06.607854 containerd[1621]: time="2025-10-30T00:07:06.607175143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbrv,Uid:2c069f41-6fd8-469f-b76f-46d048b85fa4,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:06.607854 containerd[1621]: time="2025-10-30T00:07:06.607606543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgd9q,Uid:d62c2877-00ac-4394-911e-002e28febfd2,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:07.606878 containerd[1621]: 
time="2025-10-30T00:07:07.606791125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-rhndp,Uid:2f8d59a7-a1e6-4aca-91ca-94959e3f1a19,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:07:07.751178 containerd[1621]: time="2025-10-30T00:07:07.751104309Z" level=error msg="Failed to destroy network for sandbox \"6d20079c54deecb9950151d418344a90f81782a38a2c0a8728680a8de71dfa32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:07.753391 systemd[1]: run-netns-cni\x2dfba334c2\x2d1817\x2d9434\x2d584f\x2d5401816c7037.mount: Deactivated successfully. Oct 30 00:07:07.894823 containerd[1621]: time="2025-10-30T00:07:07.894687892Z" level=error msg="Failed to destroy network for sandbox \"728589b667252246ddf92698646ef51ff84acd2ce6054ddfdea987d30e1b3083\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:07.896899 systemd[1]: run-netns-cni\x2d0222e66b\x2d8988\x2d0451\x2d8fbe\x2d0d25e6facb7c.mount: Deactivated successfully. Oct 30 00:07:08.310161 containerd[1621]: time="2025-10-30T00:07:08.310094103Z" level=error msg="Failed to destroy network for sandbox \"9fb3e737b306c1ae26de847e9f716fd642bc5f420ef9433867a05a75fcaa46a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:08.312753 systemd[1]: run-netns-cni\x2d1627975f\x2d890c\x2d388c\x2dcdc2\x2d0da6e5048554.mount: Deactivated successfully. 
Oct 30 00:07:08.725983 containerd[1621]: time="2025-10-30T00:07:08.725813560Z" level=error msg="Failed to destroy network for sandbox \"69d2eac422b2c71a5a05815a3a5ce71e910802ad1401496c5030455d61fe4347\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:08.728967 systemd[1]: run-netns-cni\x2dcd8c7a93\x2d9201\x2d654b\x2d9086\x2d919970cd4bb0.mount: Deactivated successfully. Oct 30 00:07:09.113223 containerd[1621]: time="2025-10-30T00:07:09.103485710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgt5r,Uid:6477e341-9cbb-4bbc-b90e-fcc438b0b3a9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d20079c54deecb9950151d418344a90f81782a38a2c0a8728680a8de71dfa32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.113907 kubelet[2810]: E1030 00:07:09.113501 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d20079c54deecb9950151d418344a90f81782a38a2c0a8728680a8de71dfa32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.113907 kubelet[2810]: E1030 00:07:09.113591 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d20079c54deecb9950151d418344a90f81782a38a2c0a8728680a8de71dfa32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-xgt5r" Oct 30 00:07:09.113907 kubelet[2810]: E1030 00:07:09.113621 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d20079c54deecb9950151d418344a90f81782a38a2c0a8728680a8de71dfa32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xgt5r" Oct 30 00:07:09.115851 containerd[1621]: time="2025-10-30T00:07:09.115801751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc6679f96-hdbnt,Uid:6ca14667-e807-40b3-a7f9-d2bc31653373,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"728589b667252246ddf92698646ef51ff84acd2ce6054ddfdea987d30e1b3083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.116006 kubelet[2810]: E1030 00:07:09.115975 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728589b667252246ddf92698646ef51ff84acd2ce6054ddfdea987d30e1b3083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.116094 kubelet[2810]: E1030 00:07:09.116016 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728589b667252246ddf92698646ef51ff84acd2ce6054ddfdea987d30e1b3083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:07:09.116094 kubelet[2810]: E1030 00:07:09.116044 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"728589b667252246ddf92698646ef51ff84acd2ce6054ddfdea987d30e1b3083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc6679f96-hdbnt" Oct 30 00:07:09.168775 kubelet[2810]: E1030 00:07:09.168682 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xgt5r_kube-system(6477e341-9cbb-4bbc-b90e-fcc438b0b3a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xgt5r_kube-system(6477e341-9cbb-4bbc-b90e-fcc438b0b3a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d20079c54deecb9950151d418344a90f81782a38a2c0a8728680a8de71dfa32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xgt5r" podUID="6477e341-9cbb-4bbc-b90e-fcc438b0b3a9" Oct 30 00:07:09.169228 kubelet[2810]: E1030 00:07:09.169149 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc6679f96-hdbnt_calico-system(6ca14667-e807-40b3-a7f9-d2bc31653373)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc6679f96-hdbnt_calico-system(6ca14667-e807-40b3-a7f9-d2bc31653373)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"728589b667252246ddf92698646ef51ff84acd2ce6054ddfdea987d30e1b3083\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc6679f96-hdbnt" podUID="6ca14667-e807-40b3-a7f9-d2bc31653373" Oct 30 00:07:09.371295 containerd[1621]: time="2025-10-30T00:07:09.370763961Z" level=error msg="Failed to destroy network for sandbox \"add96ec86c92bf4fc6edd4218359dcb86e24363a5854cd961eca1ded9d8cdfcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.374212 systemd[1]: run-netns-cni\x2dfe3a247b\x2d4367\x2da055\x2d9bba\x2d4e7b68110dc3.mount: Deactivated successfully. Oct 30 00:07:09.375355 containerd[1621]: time="2025-10-30T00:07:09.375238747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bd87fbdd-zshjp,Uid:fb783c3f-c8d1-42f0-a262-b7fd408f60b3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e737b306c1ae26de847e9f716fd642bc5f420ef9433867a05a75fcaa46a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.376270 kubelet[2810]: E1030 00:07:09.376167 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e737b306c1ae26de847e9f716fd642bc5f420ef9433867a05a75fcaa46a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.376357 kubelet[2810]: E1030 00:07:09.376303 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e737b306c1ae26de847e9f716fd642bc5f420ef9433867a05a75fcaa46a0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" Oct 30 00:07:09.376357 kubelet[2810]: E1030 00:07:09.376328 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fb3e737b306c1ae26de847e9f716fd642bc5f420ef9433867a05a75fcaa46a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" Oct 30 00:07:09.376452 kubelet[2810]: E1030 00:07:09.376418 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fb3e737b306c1ae26de847e9f716fd642bc5f420ef9433867a05a75fcaa46a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:07:09.381147 containerd[1621]: time="2025-10-30T00:07:09.381094988Z" level=error msg="Failed to destroy network for sandbox \"8335aed316fe55b81a3ffc98c45f07fde173014a7516be76c935635c1e6a9f73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 
00:07:09.383484 systemd[1]: run-netns-cni\x2d36adb460\x2d7b05\x2dc07f\x2dc8ab\x2db5db25e0a6ad.mount: Deactivated successfully. Oct 30 00:07:09.590748 containerd[1621]: time="2025-10-30T00:07:09.590670254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-wfqp7,Uid:5d5c2c33-987b-44fa-be72-89d7d6488ff0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d2eac422b2c71a5a05815a3a5ce71e910802ad1401496c5030455d61fe4347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.591115 kubelet[2810]: E1030 00:07:09.590990 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d2eac422b2c71a5a05815a3a5ce71e910802ad1401496c5030455d61fe4347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.591115 kubelet[2810]: E1030 00:07:09.591059 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d2eac422b2c71a5a05815a3a5ce71e910802ad1401496c5030455d61fe4347\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" Oct 30 00:07:09.591115 kubelet[2810]: E1030 00:07:09.591105 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69d2eac422b2c71a5a05815a3a5ce71e910802ad1401496c5030455d61fe4347\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" Oct 30 00:07:09.591302 kubelet[2810]: E1030 00:07:09.591169 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69d2eac422b2c71a5a05815a3a5ce71e910802ad1401496c5030455d61fe4347\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:07:09.643687 containerd[1621]: time="2025-10-30T00:07:09.643562516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2t2x,Uid:0a7e0678-b33a-4d3a-b42a-5b4a4c30629b,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:09.660406 containerd[1621]: time="2025-10-30T00:07:09.660361238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:07:09.724534 containerd[1621]: time="2025-10-30T00:07:09.724439229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbrv,Uid:2c069f41-6fd8-469f-b76f-46d048b85fa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"add96ec86c92bf4fc6edd4218359dcb86e24363a5854cd961eca1ded9d8cdfcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.724946 kubelet[2810]: E1030 00:07:09.724903 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"add96ec86c92bf4fc6edd4218359dcb86e24363a5854cd961eca1ded9d8cdfcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.725190 kubelet[2810]: E1030 00:07:09.725117 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"add96ec86c92bf4fc6edd4218359dcb86e24363a5854cd961eca1ded9d8cdfcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbrv" Oct 30 00:07:09.725190 kubelet[2810]: E1030 00:07:09.725146 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"add96ec86c92bf4fc6edd4218359dcb86e24363a5854cd961eca1ded9d8cdfcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbrv" Oct 30 00:07:09.725408 kubelet[2810]: E1030 00:07:09.725315 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rxbrv_kube-system(2c069f41-6fd8-469f-b76f-46d048b85fa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rxbrv_kube-system(2c069f41-6fd8-469f-b76f-46d048b85fa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"add96ec86c92bf4fc6edd4218359dcb86e24363a5854cd961eca1ded9d8cdfcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rxbrv" podUID="2c069f41-6fd8-469f-b76f-46d048b85fa4" Oct 30 00:07:09.733954 containerd[1621]: time="2025-10-30T00:07:09.733908230Z" level=error msg="Failed to destroy network for sandbox \"37eadfe370eba8a0100f713245b48c5ddef782a0ba3c7932e1347124a165b308\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.736258 systemd[1]: run-netns-cni\x2de0bdaff2\x2d2393\x2dc9d1\x2d6a5c\x2d67f698c66374.mount: Deactivated successfully. Oct 30 00:07:09.780593 containerd[1621]: time="2025-10-30T00:07:09.780499567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgd9q,Uid:d62c2877-00ac-4394-911e-002e28febfd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335aed316fe55b81a3ffc98c45f07fde173014a7516be76c935635c1e6a9f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.780837 kubelet[2810]: E1030 00:07:09.780788 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335aed316fe55b81a3ffc98c45f07fde173014a7516be76c935635c1e6a9f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.780896 kubelet[2810]: E1030 00:07:09.780860 2810 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335aed316fe55b81a3ffc98c45f07fde173014a7516be76c935635c1e6a9f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:07:09.780896 kubelet[2810]: E1030 00:07:09.780881 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8335aed316fe55b81a3ffc98c45f07fde173014a7516be76c935635c1e6a9f73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bgd9q" Oct 30 00:07:09.780968 kubelet[2810]: E1030 00:07:09.780947 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8335aed316fe55b81a3ffc98c45f07fde173014a7516be76c935635c1e6a9f73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:07:09.823377 containerd[1621]: time="2025-10-30T00:07:09.823305562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:07:09.848201 containerd[1621]: time="2025-10-30T00:07:09.848067632Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-869977fb74-rhndp,Uid:2f8d59a7-a1e6-4aca-91ca-94959e3f1a19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eadfe370eba8a0100f713245b48c5ddef782a0ba3c7932e1347124a165b308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.848617 kubelet[2810]: E1030 00:07:09.848546 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eadfe370eba8a0100f713245b48c5ddef782a0ba3c7932e1347124a165b308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.848817 kubelet[2810]: E1030 00:07:09.848636 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eadfe370eba8a0100f713245b48c5ddef782a0ba3c7932e1347124a165b308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" Oct 30 00:07:09.848817 kubelet[2810]: E1030 00:07:09.848663 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37eadfe370eba8a0100f713245b48c5ddef782a0ba3c7932e1347124a165b308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" Oct 30 00:07:09.848817 kubelet[2810]: E1030 00:07:09.848723 2810 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37eadfe370eba8a0100f713245b48c5ddef782a0ba3c7932e1347124a165b308\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:07:09.932717 containerd[1621]: time="2025-10-30T00:07:09.932559751Z" level=error msg="Failed to destroy network for sandbox \"c19079933a2f7f1e7d53c2877500900da10013fdd89b84aaa3b724dd30577278\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:09.936191 systemd[1]: run-netns-cni\x2d48004541\x2d5e27\x2d4b8e\x2d3577\x2dd71a9aa31137.mount: Deactivated successfully. 
Oct 30 00:07:09.974555 containerd[1621]: time="2025-10-30T00:07:09.974438554Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:07:10.013439 containerd[1621]: time="2025-10-30T00:07:10.013260821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2t2x,Uid:0a7e0678-b33a-4d3a-b42a-5b4a4c30629b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19079933a2f7f1e7d53c2877500900da10013fdd89b84aaa3b724dd30577278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:10.013795 kubelet[2810]: E1030 00:07:10.013600 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19079933a2f7f1e7d53c2877500900da10013fdd89b84aaa3b724dd30577278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:07:10.013795 kubelet[2810]: E1030 00:07:10.013691 2810 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19079933a2f7f1e7d53c2877500900da10013fdd89b84aaa3b724dd30577278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:07:10.013795 kubelet[2810]: E1030 00:07:10.013732 2810 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c19079933a2f7f1e7d53c2877500900da10013fdd89b84aaa3b724dd30577278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g2t2x" Oct 30 00:07:10.013946 kubelet[2810]: E1030 00:07:10.013820 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c19079933a2f7f1e7d53c2877500900da10013fdd89b84aaa3b724dd30577278\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:07:10.149497 containerd[1621]: time="2025-10-30T00:07:10.149390848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:07:10.150138 containerd[1621]: time="2025-10-30T00:07:10.150101853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 16.356353981s" Oct 30 00:07:10.150215 containerd[1621]: time="2025-10-30T00:07:10.150146629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:07:10.323177 containerd[1621]: time="2025-10-30T00:07:10.323113238Z" level=info msg="CreateContainer within sandbox \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:07:11.088718 containerd[1621]: time="2025-10-30T00:07:11.088214839Z" level=info msg="Container ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:11.416333 containerd[1621]: time="2025-10-30T00:07:11.416175029Z" level=info msg="CreateContainer within sandbox \"fa09e9157bdfa322e65d0978acc3f41658d2f1434aae079e232868156e2ffbb6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\"" Oct 30 00:07:11.417043 containerd[1621]: time="2025-10-30T00:07:11.417006905Z" level=info msg="StartContainer for \"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\"" Oct 30 00:07:11.418601 containerd[1621]: time="2025-10-30T00:07:11.418566117Z" level=info msg="connecting to shim ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69" address="unix:///run/containerd/s/a21d5004ed885a30e5f00c198042a8272a83c2b0e07a0cb8b234426ceddb907f" protocol=ttrpc version=3 Oct 30 00:07:11.445273 systemd[1]: Started cri-containerd-ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69.scope - libcontainer container ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69. Oct 30 00:07:11.893265 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 00:07:11.893667 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 30 00:07:11.906512 containerd[1621]: time="2025-10-30T00:07:11.906448926Z" level=info msg="StartContainer for \"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\" returns successfully" Oct 30 00:07:12.496902 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:60762.service - OpenSSH per-connection server daemon (10.0.0.1:60762). Oct 30 00:07:12.609691 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 60762 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:12.611961 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:12.628950 systemd-logind[1593]: New session 8 of user core. Oct 30 00:07:12.637920 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 00:07:12.915737 kubelet[2810]: E1030 00:07:12.915700 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:13.060186 containerd[1621]: time="2025-10-30T00:07:13.060127646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\" id:\"9b1f19366d9c106cd2270bdaa2d38e973eb66707c3f3d593f59fda98da602f53\" pid:4260 exit_status:1 exited_at:{seconds:1761782833 nanos:59749551}" Oct 30 00:07:13.379670 sshd[4230]: Connection closed by 10.0.0.1 port 60762 Oct 30 00:07:13.379986 sshd-session[4220]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:13.384779 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:60762.service: Deactivated successfully. Oct 30 00:07:13.386898 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 00:07:13.387674 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. Oct 30 00:07:13.388788 systemd-logind[1593]: Removed session 8. 
Oct 30 00:07:13.839057 kubelet[2810]: I1030 00:07:13.838967 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p998t" podStartSLOduration=5.13099053 podStartE2EDuration="36.83894873s" podCreationTimestamp="2025-10-30 00:06:37 +0000 UTC" firstStartedPulling="2025-10-30 00:06:38.443168941 +0000 UTC m=+22.985997158" lastFinishedPulling="2025-10-30 00:07:10.151127141 +0000 UTC m=+54.693955358" observedRunningTime="2025-10-30 00:07:13.836670745 +0000 UTC m=+58.379498962" watchObservedRunningTime="2025-10-30 00:07:13.83894873 +0000 UTC m=+58.381776947" Oct 30 00:07:13.917892 kubelet[2810]: E1030 00:07:13.917814 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:13.991677 containerd[1621]: time="2025-10-30T00:07:13.991608856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\" id:\"abba24bde0391f6aada17f93e5fae103b94b47d4a2e143d403b9d1a75cf49f69\" pid:4286 exit_status:1 exited_at:{seconds:1761782833 nanos:991262883}" Oct 30 00:07:14.424530 kubelet[2810]: I1030 00:07:14.424474 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-backend-key-pair\") pod \"6ca14667-e807-40b3-a7f9-d2bc31653373\" (UID: \"6ca14667-e807-40b3-a7f9-d2bc31653373\") " Oct 30 00:07:14.424725 kubelet[2810]: I1030 00:07:14.424561 2810 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9jvq\" (UniqueName: \"kubernetes.io/projected/6ca14667-e807-40b3-a7f9-d2bc31653373-kube-api-access-w9jvq\") pod \"6ca14667-e807-40b3-a7f9-d2bc31653373\" (UID: \"6ca14667-e807-40b3-a7f9-d2bc31653373\") " Oct 30 00:07:14.424725 kubelet[2810]: I1030 00:07:14.424595 2810 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-ca-bundle\") pod \"6ca14667-e807-40b3-a7f9-d2bc31653373\" (UID: \"6ca14667-e807-40b3-a7f9-d2bc31653373\") " Oct 30 00:07:14.425100 kubelet[2810]: I1030 00:07:14.425019 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6ca14667-e807-40b3-a7f9-d2bc31653373" (UID: "6ca14667-e807-40b3-a7f9-d2bc31653373"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:07:14.431245 systemd[1]: var-lib-kubelet-pods-6ca14667\x2de807\x2d40b3\x2da7f9\x2dd2bc31653373-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw9jvq.mount: Deactivated successfully. Oct 30 00:07:14.431423 systemd[1]: var-lib-kubelet-pods-6ca14667\x2de807\x2d40b3\x2da7f9\x2dd2bc31653373-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 00:07:14.432424 kubelet[2810]: I1030 00:07:14.431988 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca14667-e807-40b3-a7f9-d2bc31653373-kube-api-access-w9jvq" (OuterVolumeSpecName: "kube-api-access-w9jvq") pod "6ca14667-e807-40b3-a7f9-d2bc31653373" (UID: "6ca14667-e807-40b3-a7f9-d2bc31653373"). InnerVolumeSpecName "kube-api-access-w9jvq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:07:14.432424 kubelet[2810]: I1030 00:07:14.432020 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6ca14667-e807-40b3-a7f9-d2bc31653373" (UID: "6ca14667-e807-40b3-a7f9-d2bc31653373"). 
InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:07:14.525713 kubelet[2810]: I1030 00:07:14.525627 2810 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 00:07:14.525713 kubelet[2810]: I1030 00:07:14.525673 2810 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6ca14667-e807-40b3-a7f9-d2bc31653373-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 00:07:14.525713 kubelet[2810]: I1030 00:07:14.525682 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w9jvq\" (UniqueName: \"kubernetes.io/projected/6ca14667-e807-40b3-a7f9-d2bc31653373-kube-api-access-w9jvq\") on node \"localhost\" DevicePath \"\"" Oct 30 00:07:14.925757 systemd[1]: Removed slice kubepods-besteffort-pod6ca14667_e807_40b3_a7f9_d2bc31653373.slice - libcontainer container kubepods-besteffort-pod6ca14667_e807_40b3_a7f9_d2bc31653373.slice. Oct 30 00:07:15.573589 systemd[1]: Created slice kubepods-besteffort-pod0ddc0305_4c4c_4d8f_adc8_d24daa6c347e.slice - libcontainer container kubepods-besteffort-pod0ddc0305_4c4c_4d8f_adc8_d24daa6c347e.slice. 
Oct 30 00:07:15.609793 kubelet[2810]: I1030 00:07:15.609729 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ca14667-e807-40b3-a7f9-d2bc31653373" path="/var/lib/kubelet/pods/6ca14667-e807-40b3-a7f9-d2bc31653373/volumes" Oct 30 00:07:15.635750 kubelet[2810]: I1030 00:07:15.635694 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ddc0305-4c4c-4d8f-adc8-d24daa6c347e-whisker-backend-key-pair\") pod \"whisker-78cd8f6cd-jtlv8\" (UID: \"0ddc0305-4c4c-4d8f-adc8-d24daa6c347e\") " pod="calico-system/whisker-78cd8f6cd-jtlv8" Oct 30 00:07:15.635750 kubelet[2810]: I1030 00:07:15.635756 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhvms\" (UniqueName: \"kubernetes.io/projected/0ddc0305-4c4c-4d8f-adc8-d24daa6c347e-kube-api-access-mhvms\") pod \"whisker-78cd8f6cd-jtlv8\" (UID: \"0ddc0305-4c4c-4d8f-adc8-d24daa6c347e\") " pod="calico-system/whisker-78cd8f6cd-jtlv8" Oct 30 00:07:15.635979 kubelet[2810]: I1030 00:07:15.635792 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ddc0305-4c4c-4d8f-adc8-d24daa6c347e-whisker-ca-bundle\") pod \"whisker-78cd8f6cd-jtlv8\" (UID: \"0ddc0305-4c4c-4d8f-adc8-d24daa6c347e\") " pod="calico-system/whisker-78cd8f6cd-jtlv8" Oct 30 00:07:15.877963 containerd[1621]: time="2025-10-30T00:07:15.877812403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cd8f6cd-jtlv8,Uid:0ddc0305-4c4c-4d8f-adc8-d24daa6c347e,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:17.314521 systemd-networkd[1519]: vxlan.calico: Link UP Oct 30 00:07:17.314557 systemd-networkd[1519]: vxlan.calico: Gained carrier Oct 30 00:07:17.613187 systemd-networkd[1519]: cali9086a11c3bb: Link UP Oct 30 00:07:17.613816 systemd-networkd[1519]: 
cali9086a11c3bb: Gained carrier Oct 30 00:07:17.640326 containerd[1621]: 2025-10-30 00:07:16.006 [INFO][4316] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:07:17.640326 containerd[1621]: 2025-10-30 00:07:16.130 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0 whisker-78cd8f6cd- calico-system 0ddc0305-4c4c-4d8f-adc8-d24daa6c347e 1055 0 2025-10-30 00:07:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78cd8f6cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-78cd8f6cd-jtlv8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9086a11c3bb [] [] }} ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-" Oct 30 00:07:17.640326 containerd[1621]: 2025-10-30 00:07:16.131 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:17.640326 containerd[1621]: 2025-10-30 00:07:17.304 [INFO][4329] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" HandleID="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Workload="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.305 [INFO][4329] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" 
HandleID="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Workload="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ab420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-78cd8f6cd-jtlv8", "timestamp":"2025-10-30 00:07:17.304831365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.305 [INFO][4329] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.305 [INFO][4329] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.305 [INFO][4329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.328 [INFO][4329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" host="localhost" Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.394 [INFO][4329] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.399 [INFO][4329] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.401 [INFO][4329] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.403 [INFO][4329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:17.640980 containerd[1621]: 2025-10-30 00:07:17.403 
[INFO][4329] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" host="localhost" Oct 30 00:07:17.641379 containerd[1621]: 2025-10-30 00:07:17.404 [INFO][4329] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b Oct 30 00:07:17.641379 containerd[1621]: 2025-10-30 00:07:17.566 [INFO][4329] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" host="localhost" Oct 30 00:07:17.641379 containerd[1621]: 2025-10-30 00:07:17.596 [INFO][4329] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" host="localhost" Oct 30 00:07:17.641379 containerd[1621]: 2025-10-30 00:07:17.596 [INFO][4329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" host="localhost" Oct 30 00:07:17.641379 containerd[1621]: 2025-10-30 00:07:17.596 [INFO][4329] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:07:17.641379 containerd[1621]: 2025-10-30 00:07:17.596 [INFO][4329] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" HandleID="k8s-pod-network.c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Workload="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:17.641537 containerd[1621]: 2025-10-30 00:07:17.600 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0", GenerateName:"whisker-78cd8f6cd-", Namespace:"calico-system", SelfLink:"", UID:"0ddc0305-4c4c-4d8f-adc8-d24daa6c347e", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cd8f6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-78cd8f6cd-jtlv8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9086a11c3bb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:17.641537 containerd[1621]: 2025-10-30 00:07:17.600 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:17.641649 containerd[1621]: 2025-10-30 00:07:17.600 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9086a11c3bb ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:17.641649 containerd[1621]: 2025-10-30 00:07:17.615 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:17.641706 containerd[1621]: 2025-10-30 00:07:17.617 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0", GenerateName:"whisker-78cd8f6cd-", Namespace:"calico-system", SelfLink:"", UID:"0ddc0305-4c4c-4d8f-adc8-d24daa6c347e", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 7, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78cd8f6cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b", Pod:"whisker-78cd8f6cd-jtlv8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9086a11c3bb", MAC:"4a:f7:2b:6a:b7:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:17.641775 containerd[1621]: 2025-10-30 00:07:17.633 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" Namespace="calico-system" Pod="whisker-78cd8f6cd-jtlv8" WorkloadEndpoint="localhost-k8s-whisker--78cd8f6cd--jtlv8-eth0" Oct 30 00:07:18.405744 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:60774.service - OpenSSH per-connection server daemon (10.0.0.1:60774). Oct 30 00:07:18.483432 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 60774 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:18.485669 sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:18.491442 systemd-logind[1593]: New session 9 of user core. Oct 30 00:07:18.499250 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 30 00:07:18.588127 containerd[1621]: time="2025-10-30T00:07:18.587339884Z" level=info msg="connecting to shim c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b" address="unix:///run/containerd/s/8c497130d123708dc7dce3ac1dd7b1630813d76ecf40eb357a3aa3fb6c508b2a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:18.616283 systemd[1]: Started cri-containerd-c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b.scope - libcontainer container c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b. Oct 30 00:07:18.631153 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:18.645458 sshd[4558]: Connection closed by 10.0.0.1 port 60774 Oct 30 00:07:18.645829 sshd-session[4555]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:18.651162 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:60774.service: Deactivated successfully. Oct 30 00:07:18.654828 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 00:07:18.658699 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:07:18.662794 systemd-logind[1593]: Removed session 9. 
Oct 30 00:07:18.694683 containerd[1621]: time="2025-10-30T00:07:18.694615366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78cd8f6cd-jtlv8,Uid:0ddc0305-4c4c-4d8f-adc8-d24daa6c347e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4c73b004bfca6938ebec65ed50c705c9ceac617d90be3aea8eadd144d8d0b6b\"" Oct 30 00:07:18.696622 containerd[1621]: time="2025-10-30T00:07:18.696513224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:07:18.819343 systemd-networkd[1519]: vxlan.calico: Gained IPv6LL Oct 30 00:07:19.011280 systemd-networkd[1519]: cali9086a11c3bb: Gained IPv6LL Oct 30 00:07:19.076846 containerd[1621]: time="2025-10-30T00:07:19.076757319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:19.088215 containerd[1621]: time="2025-10-30T00:07:19.088103317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:07:19.088294 containerd[1621]: time="2025-10-30T00:07:19.088242402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:07:19.088490 kubelet[2810]: E1030 00:07:19.088432 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:07:19.088939 kubelet[2810]: E1030 00:07:19.088513 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:07:19.088978 kubelet[2810]: E1030 00:07:19.088689 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2384e6ac67324a0e992f1b31f21c6bff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhvms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cd8f6cd-jtlv8_calico-system(0ddc0305-4c4c-4d8f-adc8-d24daa6c347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:19.090940 containerd[1621]: time="2025-10-30T00:07:19.090815689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:07:19.511828 containerd[1621]: time="2025-10-30T00:07:19.511770231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:19.554226 containerd[1621]: time="2025-10-30T00:07:19.554120367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:07:19.554432 containerd[1621]: time="2025-10-30T00:07:19.554270574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:07:19.554675 kubelet[2810]: E1030 00:07:19.554586 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:07:19.554750 kubelet[2810]: E1030 00:07:19.554676 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:07:19.554924 kubelet[2810]: E1030 00:07:19.554847 2810 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhvms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cd8f6cd-jtlv8_calico-system(0ddc0305-4c4c-4d8f-adc8-d24daa6c347e): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:19.556223 kubelet[2810]: E1030 00:07:19.556062 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cd8f6cd-jtlv8" podUID="0ddc0305-4c4c-4d8f-adc8-d24daa6c347e" Oct 30 00:07:19.935057 kubelet[2810]: E1030 00:07:19.934993 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cd8f6cd-jtlv8" podUID="0ddc0305-4c4c-4d8f-adc8-d24daa6c347e" Oct 30 00:07:20.607671 containerd[1621]: time="2025-10-30T00:07:20.607619002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bd87fbdd-zshjp,Uid:fb783c3f-c8d1-42f0-a262-b7fd408f60b3,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:20.813349 systemd-networkd[1519]: cali251a1a17850: Link UP Oct 30 00:07:20.814997 systemd-networkd[1519]: cali251a1a17850: Gained carrier Oct 30 00:07:21.080790 containerd[1621]: 2025-10-30 00:07:20.715 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0 calico-kube-controllers-69bd87fbdd- calico-system fb783c3f-c8d1-42f0-a262-b7fd408f60b3 903 0 2025-10-30 00:06:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69bd87fbdd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69bd87fbdd-zshjp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali251a1a17850 [] [] }} ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-" Oct 30 00:07:21.080790 containerd[1621]: 2025-10-30 00:07:20.715 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.080790 containerd[1621]: 2025-10-30 00:07:20.755 [INFO][4635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" HandleID="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Workload="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.755 [INFO][4635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" HandleID="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Workload="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69bd87fbdd-zshjp", "timestamp":"2025-10-30 00:07:20.755686706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.756 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.756 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.756 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.763 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" host="localhost" Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.768 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.772 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.773 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.775 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:21.081218 containerd[1621]: 2025-10-30 00:07:20.775 [INFO][4635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" host="localhost" Oct 30 00:07:21.081744 containerd[1621]: 2025-10-30 00:07:20.777 [INFO][4635] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09 Oct 30 00:07:21.081744 containerd[1621]: 2025-10-30 00:07:20.788 [INFO][4635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" host="localhost" Oct 30 00:07:21.081744 containerd[1621]: 2025-10-30 00:07:20.807 [INFO][4635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" host="localhost" Oct 30 00:07:21.081744 containerd[1621]: 2025-10-30 00:07:20.807 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" host="localhost" Oct 30 00:07:21.081744 containerd[1621]: 2025-10-30 00:07:20.807 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:07:21.081744 containerd[1621]: 2025-10-30 00:07:20.807 [INFO][4635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" HandleID="k8s-pod-network.0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Workload="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.082003 containerd[1621]: 2025-10-30 00:07:20.811 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0", GenerateName:"calico-kube-controllers-69bd87fbdd-", Namespace:"calico-system", SelfLink:"", UID:"fb783c3f-c8d1-42f0-a262-b7fd408f60b3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bd87fbdd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69bd87fbdd-zshjp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali251a1a17850", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:21.082121 containerd[1621]: 2025-10-30 00:07:20.811 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.082121 containerd[1621]: 2025-10-30 00:07:20.811 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali251a1a17850 ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.082121 containerd[1621]: 2025-10-30 00:07:20.814 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.082238 containerd[1621]: 
2025-10-30 00:07:20.815 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0", GenerateName:"calico-kube-controllers-69bd87fbdd-", Namespace:"calico-system", SelfLink:"", UID:"fb783c3f-c8d1-42f0-a262-b7fd408f60b3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69bd87fbdd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09", Pod:"calico-kube-controllers-69bd87fbdd-zshjp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali251a1a17850", MAC:"96:38:56:32:66:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:21.082320 containerd[1621]: 
2025-10-30 00:07:21.076 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" Namespace="calico-system" Pod="calico-kube-controllers-69bd87fbdd-zshjp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69bd87fbdd--zshjp-eth0" Oct 30 00:07:21.606878 kubelet[2810]: E1030 00:07:21.606790 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:21.607510 kubelet[2810]: E1030 00:07:21.606968 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:21.607552 containerd[1621]: time="2025-10-30T00:07:21.607491398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgt5r,Uid:6477e341-9cbb-4bbc-b90e-fcc438b0b3a9,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:21.607729 containerd[1621]: time="2025-10-30T00:07:21.607707841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbrv,Uid:2c069f41-6fd8-469f-b76f-46d048b85fa4,Namespace:kube-system,Attempt:0,}" Oct 30 00:07:21.880735 containerd[1621]: time="2025-10-30T00:07:21.880604178Z" level=info msg="connecting to shim 0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09" address="unix:///run/containerd/s/bd9b1d6a5c4ab21dfd98d8a9be4542e1f4d7e8ac390124e1205cf17549a124f9" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:21.910323 systemd[1]: Started cri-containerd-0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09.scope - libcontainer container 0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09. 
Oct 30 00:07:21.926148 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:22.147329 systemd-networkd[1519]: cali251a1a17850: Gained IPv6LL Oct 30 00:07:22.542340 containerd[1621]: time="2025-10-30T00:07:22.542282261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69bd87fbdd-zshjp,Uid:fb783c3f-c8d1-42f0-a262-b7fd408f60b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f81fc12833139fe4e3450a6ba5e2cf9e1e8dd285e2a1363d67cf3b487af6a09\"" Oct 30 00:07:22.544059 containerd[1621]: time="2025-10-30T00:07:22.544019435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:07:22.607349 containerd[1621]: time="2025-10-30T00:07:22.607301694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgd9q,Uid:d62c2877-00ac-4394-911e-002e28febfd2,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:22.607713 containerd[1621]: time="2025-10-30T00:07:22.607306854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-rhndp,Uid:2f8d59a7-a1e6-4aca-91ca-94959e3f1a19,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:07:22.702348 systemd-networkd[1519]: cali48a2ccb13f4: Link UP Oct 30 00:07:22.702959 systemd-networkd[1519]: cali48a2ccb13f4: Gained carrier Oct 30 00:07:22.923928 containerd[1621]: 2025-10-30 00:07:21.958 [INFO][4694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0 coredns-674b8bbfcf- kube-system 6477e341-9cbb-4bbc-b90e-fcc438b0b3a9 906 0 2025-10-30 00:06:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-xgt5r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] 
cali48a2ccb13f4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-" Oct 30 00:07:22.923928 containerd[1621]: 2025-10-30 00:07:21.959 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:22.923928 containerd[1621]: 2025-10-30 00:07:22.317 [INFO][4718] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" HandleID="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Workload="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.317 [INFO][4718] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" HandleID="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Workload="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001237e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-xgt5r", "timestamp":"2025-10-30 00:07:22.317093379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.317 [INFO][4718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.317 [INFO][4718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.317 [INFO][4718] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.446 [INFO][4718] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" host="localhost" Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.452 [INFO][4718] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.456 [INFO][4718] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.458 [INFO][4718] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.460 [INFO][4718] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:22.925214 containerd[1621]: 2025-10-30 00:07:22.460 [INFO][4718] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" host="localhost" Oct 30 00:07:22.925832 containerd[1621]: 2025-10-30 00:07:22.461 [INFO][4718] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b Oct 30 00:07:22.925832 containerd[1621]: 2025-10-30 00:07:22.488 [INFO][4718] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" host="localhost" Oct 30 00:07:22.925832 containerd[1621]: 2025-10-30 00:07:22.695 [INFO][4718] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" host="localhost" Oct 30 00:07:22.925832 containerd[1621]: 2025-10-30 00:07:22.695 [INFO][4718] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" host="localhost" Oct 30 00:07:22.925832 containerd[1621]: 2025-10-30 00:07:22.695 [INFO][4718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:07:22.925832 containerd[1621]: 2025-10-30 00:07:22.695 [INFO][4718] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" HandleID="k8s-pod-network.506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Workload="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:22.926027 containerd[1621]: 2025-10-30 00:07:22.699 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6477e341-9cbb-4bbc-b90e-fcc438b0b3a9", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-xgt5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48a2ccb13f4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:22.926136 containerd[1621]: 2025-10-30 00:07:22.699 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:22.926136 containerd[1621]: 2025-10-30 00:07:22.699 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48a2ccb13f4 ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:22.926136 containerd[1621]: 2025-10-30 00:07:22.703 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:22.926253 containerd[1621]: 2025-10-30 00:07:22.703 [INFO][4694] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6477e341-9cbb-4bbc-b90e-fcc438b0b3a9", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b", Pod:"coredns-674b8bbfcf-xgt5r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48a2ccb13f4", MAC:"22:6f:36:60:53:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:22.926253 containerd[1621]: 2025-10-30 00:07:22.920 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" Namespace="kube-system" Pod="coredns-674b8bbfcf-xgt5r" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--xgt5r-eth0" Oct 30 00:07:23.063238 containerd[1621]: time="2025-10-30T00:07:23.063170134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:23.267592 systemd-networkd[1519]: cali78fced7e1d0: Link UP Oct 30 00:07:23.269640 systemd-networkd[1519]: cali78fced7e1d0: Gained carrier Oct 30 00:07:23.312906 containerd[1621]: time="2025-10-30T00:07:23.312816395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:07:23.313566 kubelet[2810]: E1030 00:07:23.313344 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:07:23.313566 kubelet[2810]: E1030 00:07:23.313507 2810 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:07:23.315097 kubelet[2810]: E1030 00:07:23.314265 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v52lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:23.315224 containerd[1621]: time="2025-10-30T00:07:23.314766915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:07:23.315460 kubelet[2810]: E1030 00:07:23.315393 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:07:23.606956 containerd[1621]: time="2025-10-30T00:07:23.606803957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-wfqp7,Uid:5d5c2c33-987b-44fa-be72-89d7d6488ff0,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:07:23.664667 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:33076.service - OpenSSH per-connection server daemon (10.0.0.1:33076). Oct 30 00:07:23.732295 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 33076 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:23.733956 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:23.738479 systemd-logind[1593]: New session 10 of user core. Oct 30 00:07:23.752256 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 30 00:07:23.944877 kubelet[2810]: E1030 00:07:23.944736 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.491 [INFO][4732] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0 coredns-674b8bbfcf- kube-system 2c069f41-6fd8-469f-b76f-46d048b85fa4 908 0 2025-10-30 00:06:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rxbrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali78fced7e1d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.491 [INFO][4732] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.926 [INFO][4751] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" HandleID="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Workload="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.926 [INFO][4751] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" HandleID="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Workload="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3430), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rxbrv", "timestamp":"2025-10-30 00:07:22.925998899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.926 [INFO][4751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.926 [INFO][4751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.926 [INFO][4751] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.933 [INFO][4751] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.937 [INFO][4751] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.942 [INFO][4751] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.944 [INFO][4751] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.946 [INFO][4751] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.946 [INFO][4751] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.948 [INFO][4751] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211 Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:22.992 [INFO][4751] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:23.258 [INFO][4751] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:23.258 [INFO][4751] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" host="localhost" Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:23.258 [INFO][4751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:07:24.179222 containerd[1621]: 2025-10-30 00:07:23.258 [INFO][4751] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" HandleID="k8s-pod-network.d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Workload="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.180055 containerd[1621]: 2025-10-30 00:07:23.263 [INFO][4732] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2c069f41-6fd8-469f-b76f-46d048b85fa4", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rxbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78fced7e1d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:24.180055 containerd[1621]: 2025-10-30 00:07:23.263 [INFO][4732] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.180055 containerd[1621]: 2025-10-30 00:07:23.263 [INFO][4732] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78fced7e1d0 ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.180055 containerd[1621]: 2025-10-30 00:07:23.270 [INFO][4732] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.180055 containerd[1621]: 2025-10-30 00:07:23.273 [INFO][4732] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2c069f41-6fd8-469f-b76f-46d048b85fa4", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211", Pod:"coredns-674b8bbfcf-rxbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali78fced7e1d0", MAC:"5e:62:c3:89:7e:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:24.180055 containerd[1621]: 2025-10-30 00:07:24.175 [INFO][4732] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rxbrv-eth0" Oct 30 00:07:24.451347 systemd-networkd[1519]: cali78fced7e1d0: Gained IPv6LL Oct 30 00:07:24.571016 sshd[4821]: Connection closed by 10.0.0.1 port 33076 Oct 30 00:07:24.571406 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:24.576266 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:33076.service: Deactivated successfully. Oct 30 00:07:24.578650 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:07:24.579764 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:07:24.581142 systemd-logind[1593]: Removed session 10. 
Oct 30 00:07:24.643340 systemd-networkd[1519]: cali48a2ccb13f4: Gained IPv6LL Oct 30 00:07:25.607717 containerd[1621]: time="2025-10-30T00:07:25.607659834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2t2x,Uid:0a7e0678-b33a-4d3a-b42a-5b4a4c30629b,Namespace:calico-system,Attempt:0,}" Oct 30 00:07:25.847514 containerd[1621]: time="2025-10-30T00:07:25.847457245Z" level=info msg="connecting to shim 506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b" address="unix:///run/containerd/s/cc6f1c1653b724b977abf3332712b9a6a29d19dcde95feed175dc1ddcf1d0c0c" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:25.876313 systemd[1]: Started cri-containerd-506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b.scope - libcontainer container 506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b. Oct 30 00:07:25.891423 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:26.130194 containerd[1621]: time="2025-10-30T00:07:26.130030213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xgt5r,Uid:6477e341-9cbb-4bbc-b90e-fcc438b0b3a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b\"" Oct 30 00:07:26.130833 kubelet[2810]: E1030 00:07:26.130795 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:26.146322 systemd-networkd[1519]: calid33820c9f73: Link UP Oct 30 00:07:26.147230 systemd-networkd[1519]: calid33820c9f73: Gained carrier Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.262 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0 calico-apiserver-869977fb74- 
calico-apiserver 2f8d59a7-a1e6-4aca-91ca-94959e3f1a19 902 0 2025-10-30 00:06:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:869977fb74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-869977fb74-rhndp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid33820c9f73 [] [] }} ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.262 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.298 [INFO][4807] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" HandleID="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Workload="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.298 [INFO][4807] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" HandleID="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Workload="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-869977fb74-rhndp", "timestamp":"2025-10-30 00:07:23.298457418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.298 [INFO][4807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.298 [INFO][4807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:23.298 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:24.176 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:24.723 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:24.810 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:25.366 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:25.369 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:25.369 [INFO][4807] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:25.870 [INFO][4807] ipam/ipam.go 
1780: Creating new handle: k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9 Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:25.917 [INFO][4807] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:26.139 [INFO][4807] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:26.139 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" host="localhost" Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:26.139 [INFO][4807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:07:26.772114 containerd[1621]: 2025-10-30 00:07:26.139 [INFO][4807] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" HandleID="k8s-pod-network.3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Workload="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:26.774215 containerd[1621]: 2025-10-30 00:07:26.144 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0", GenerateName:"calico-apiserver-869977fb74-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f8d59a7-a1e6-4aca-91ca-94959e3f1a19", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869977fb74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-869977fb74-rhndp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid33820c9f73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:26.774215 containerd[1621]: 2025-10-30 00:07:26.144 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:26.774215 containerd[1621]: 2025-10-30 00:07:26.144 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid33820c9f73 ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:26.774215 containerd[1621]: 2025-10-30 00:07:26.146 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:26.774215 containerd[1621]: 2025-10-30 00:07:26.147 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0", 
GenerateName:"calico-apiserver-869977fb74-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f8d59a7-a1e6-4aca-91ca-94959e3f1a19", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869977fb74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9", Pod:"calico-apiserver-869977fb74-rhndp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid33820c9f73", MAC:"72:4e:0c:d1:8a:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:26.774215 containerd[1621]: 2025-10-30 00:07:26.767 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-rhndp" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--rhndp-eth0" Oct 30 00:07:27.395296 systemd-networkd[1519]: calid33820c9f73: Gained IPv6LL Oct 30 00:07:27.441663 containerd[1621]: time="2025-10-30T00:07:27.441624166Z" level=info msg="CreateContainer within sandbox 
\"506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:07:27.549202 systemd-networkd[1519]: caliaf51a9a6e9a: Link UP Oct 30 00:07:27.549973 systemd-networkd[1519]: caliaf51a9a6e9a: Gained carrier Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:23.261 [INFO][4770] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bgd9q-eth0 csi-node-driver- calico-system d62c2877-00ac-4394-911e-002e28febfd2 777 0 2025-10-30 00:06:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bgd9q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf51a9a6e9a [] [] }} ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:23.261 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:23.317 [INFO][4800] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" HandleID="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Workload="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 
00:07:23.317 [INFO][4800] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" HandleID="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Workload="localhost-k8s-csi--node--driver--bgd9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bgd9q", "timestamp":"2025-10-30 00:07:23.317319905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:23.317 [INFO][4800] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.139 [INFO][4800] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.139 [INFO][4800] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.201 [INFO][4800] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.771 [INFO][4800] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.777 [INFO][4800] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.779 [INFO][4800] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.782 [INFO][4800] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.783 [INFO][4800] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:26.784 [INFO][4800] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2 Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:27.182 [INFO][4800] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:27.542 [INFO][4800] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:27.542 [INFO][4800] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" host="localhost" Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:27.543 [INFO][4800] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:07:28.072630 containerd[1621]: 2025-10-30 00:07:27.543 [INFO][4800] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" HandleID="k8s-pod-network.c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Workload="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.073811 containerd[1621]: 2025-10-30 00:07:27.546 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgd9q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d62c2877-00ac-4394-911e-002e28febfd2", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bgd9q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf51a9a6e9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:28.073811 containerd[1621]: 2025-10-30 00:07:27.546 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.073811 containerd[1621]: 2025-10-30 00:07:27.546 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf51a9a6e9a ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.073811 containerd[1621]: 2025-10-30 00:07:27.550 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.073811 containerd[1621]: 2025-10-30 00:07:27.551 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" 
Namespace="calico-system" Pod="csi-node-driver-bgd9q" WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bgd9q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d62c2877-00ac-4394-911e-002e28febfd2", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2", Pod:"csi-node-driver-bgd9q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf51a9a6e9a", MAC:"b6:a5:d7:94:1a:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:28.073811 containerd[1621]: 2025-10-30 00:07:28.065 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" Namespace="calico-system" Pod="csi-node-driver-bgd9q" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bgd9q-eth0" Oct 30 00:07:28.359927 containerd[1621]: time="2025-10-30T00:07:28.359798773Z" level=info msg="connecting to shim d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211" address="unix:///run/containerd/s/405424e5f604f8afa9e30289fb8d36ab4145ad0d0dc380e864cc78693459a1de" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:28.388246 systemd[1]: Started cri-containerd-d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211.scope - libcontainer container d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211. Oct 30 00:07:28.428138 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:28.437330 systemd-networkd[1519]: cali87b9b3658ee: Link UP Oct 30 00:07:28.442421 systemd-networkd[1519]: cali87b9b3658ee: Gained carrier Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:25.366 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0 calico-apiserver-869977fb74- calico-apiserver 5d5c2c33-987b-44fa-be72-89d7d6488ff0 907 0 2025-10-30 00:06:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:869977fb74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-869977fb74-wfqp7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87b9b3658ee [] [] }} ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:25.366 [INFO][4843] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:25.946 [INFO][4906] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" HandleID="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Workload="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:25.946 [INFO][4906] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" HandleID="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Workload="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002915c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-869977fb74-wfqp7", "timestamp":"2025-10-30 00:07:25.946529706 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:25.946 [INFO][4906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:27.543 [INFO][4906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:27.543 [INFO][4906] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.053 [INFO][4906] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.060 [INFO][4906] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.067 [INFO][4906] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.070 [INFO][4906] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.073 [INFO][4906] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.073 [INFO][4906] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.076 [INFO][4906] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82 Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.112 [INFO][4906] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.422 [INFO][4906] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.422 [INFO][4906] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" host="localhost" Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.422 [INFO][4906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:07:28.609212 containerd[1621]: 2025-10-30 00:07:28.422 [INFO][4906] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" HandleID="k8s-pod-network.ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Workload="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.611159 containerd[1621]: 2025-10-30 00:07:28.427 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0", GenerateName:"calico-apiserver-869977fb74-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d5c2c33-987b-44fa-be72-89d7d6488ff0", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869977fb74", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-869977fb74-wfqp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87b9b3658ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:28.611159 containerd[1621]: 2025-10-30 00:07:28.427 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.611159 containerd[1621]: 2025-10-30 00:07:28.428 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87b9b3658ee ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.611159 containerd[1621]: 2025-10-30 00:07:28.443 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.611159 containerd[1621]: 2025-10-30 00:07:28.446 [INFO][4843] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0", GenerateName:"calico-apiserver-869977fb74-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d5c2c33-987b-44fa-be72-89d7d6488ff0", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"869977fb74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82", Pod:"calico-apiserver-869977fb74-wfqp7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87b9b3658ee", MAC:"92:5b:bf:a2:85:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:28.611159 containerd[1621]: 2025-10-30 00:07:28.603 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" Namespace="calico-apiserver" Pod="calico-apiserver-869977fb74-wfqp7" WorkloadEndpoint="localhost-k8s-calico--apiserver--869977fb74--wfqp7-eth0" Oct 30 00:07:28.685308 containerd[1621]: time="2025-10-30T00:07:28.685249516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbrv,Uid:2c069f41-6fd8-469f-b76f-46d048b85fa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211\"" Oct 30 00:07:28.687377 kubelet[2810]: E1030 00:07:28.687329 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:28.695660 systemd-networkd[1519]: cali0b0e4d3d971: Link UP Oct 30 00:07:28.696374 systemd-networkd[1519]: cali0b0e4d3d971: Gained carrier Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.049 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--g2t2x-eth0 goldmane-666569f655- calico-system 0a7e0678-b33a-4d3a-b42a-5b4a4c30629b 905 0 2025-10-30 00:06:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-g2t2x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0b0e4d3d971 [] [] }} ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.050 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.103 [INFO][4948] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" HandleID="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Workload="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.103 [INFO][4948] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" HandleID="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Workload="localhost-k8s-goldmane--666569f655--g2t2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00051eb80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-g2t2x", "timestamp":"2025-10-30 00:07:28.103397388 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.103 [INFO][4948] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.422 [INFO][4948] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.423 [INFO][4948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.434 [INFO][4948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.440 [INFO][4948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.450 [INFO][4948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.452 [INFO][4948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.603 [INFO][4948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.603 [INFO][4948] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.606 [INFO][4948] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99 Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.637 [INFO][4948] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.687 [INFO][4948] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.687 [INFO][4948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" host="localhost" Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.687 [INFO][4948] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:07:28.804118 containerd[1621]: 2025-10-30 00:07:28.688 [INFO][4948] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" HandleID="k8s-pod-network.ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Workload="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804710 containerd[1621]: 2025-10-30 00:07:28.692 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g2t2x-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0a7e0678-b33a-4d3a-b42a-5b4a4c30629b", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-g2t2x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b0e4d3d971", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:28.804710 containerd[1621]: 2025-10-30 00:07:28.693 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804710 containerd[1621]: 2025-10-30 00:07:28.693 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b0e4d3d971 ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804710 containerd[1621]: 2025-10-30 00:07:28.697 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804710 containerd[1621]: 2025-10-30 00:07:28.698 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g2t2x-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0a7e0678-b33a-4d3a-b42a-5b4a4c30629b", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 6, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99", Pod:"goldmane-666569f655-g2t2x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0b0e4d3d971", MAC:"76:e7:1b:a5:61:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:07:28.804710 containerd[1621]: 2025-10-30 00:07:28.798 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" Namespace="calico-system" Pod="goldmane-666569f655-g2t2x" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g2t2x-eth0" Oct 30 00:07:28.804939 containerd[1621]: time="2025-10-30T00:07:28.804722705Z" level=info msg="CreateContainer within 
sandbox \"d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:07:28.832966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount337876026.mount: Deactivated successfully. Oct 30 00:07:28.838131 containerd[1621]: time="2025-10-30T00:07:28.838036163Z" level=info msg="Container 8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:29.354881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653404587.mount: Deactivated successfully. Oct 30 00:07:29.443672 systemd-networkd[1519]: caliaf51a9a6e9a: Gained IPv6LL Oct 30 00:07:29.449943 containerd[1621]: time="2025-10-30T00:07:29.449864661Z" level=info msg="CreateContainer within sandbox \"506d3b6b2b54fdbd41682b58a9672500dedb8086d34a84ffd531b49d0178712b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e\"" Oct 30 00:07:29.450767 containerd[1621]: time="2025-10-30T00:07:29.450699871Z" level=info msg="StartContainer for \"8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e\"" Oct 30 00:07:29.451564 containerd[1621]: time="2025-10-30T00:07:29.451515252Z" level=info msg="connecting to shim 8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e" address="unix:///run/containerd/s/cc6f1c1653b724b977abf3332712b9a6a29d19dcde95feed175dc1ddcf1d0c0c" protocol=ttrpc version=3 Oct 30 00:07:29.464289 containerd[1621]: time="2025-10-30T00:07:29.464194802Z" level=info msg="connecting to shim 3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9" address="unix:///run/containerd/s/a652c3b36a7037f579c9a01f4ceb3463c6efe7528bbd25e0bff523d589427301" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:29.476193 containerd[1621]: time="2025-10-30T00:07:29.476138822Z" level=info msg="connecting to shim 
c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2" address="unix:///run/containerd/s/362fae46df239aa805befd4effa601fbefadd9198a1630ad7e91da6660c6d894" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:29.493449 systemd[1]: Started cri-containerd-8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e.scope - libcontainer container 8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e. Oct 30 00:07:29.513237 systemd[1]: Started cri-containerd-3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9.scope - libcontainer container 3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9. Oct 30 00:07:29.517101 systemd[1]: Started cri-containerd-c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2.scope - libcontainer container c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2. Oct 30 00:07:29.531224 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:29.532631 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:29.567505 containerd[1621]: time="2025-10-30T00:07:29.567455027Z" level=info msg="Container 80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:07:29.586804 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:33156.service - OpenSSH per-connection server daemon (10.0.0.1:33156). Oct 30 00:07:29.636218 systemd-networkd[1519]: cali87b9b3658ee: Gained IPv6LL Oct 30 00:07:29.642796 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 33156 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:29.644975 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:29.650349 systemd-logind[1593]: New session 11 of user core. 
Oct 30 00:07:29.660264 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 00:07:29.712386 containerd[1621]: time="2025-10-30T00:07:29.712317153Z" level=info msg="StartContainer for \"8f7520a65d22cde87fea34997ce1c038f37cf9a12631d088c4298739b7ada19e\" returns successfully" Oct 30 00:07:29.761619 containerd[1621]: time="2025-10-30T00:07:29.761537951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bgd9q,Uid:d62c2877-00ac-4394-911e-002e28febfd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"c410b8da247e2929dc16ef41cf0049ea33859fd2ae688411961a4bc4ad6694b2\"" Oct 30 00:07:29.763407 containerd[1621]: time="2025-10-30T00:07:29.763366962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:07:29.855936 containerd[1621]: time="2025-10-30T00:07:29.855885226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-rhndp,Uid:2f8d59a7-a1e6-4aca-91ca-94959e3f1a19,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f8baaf787c3487193cc59090f08cdb1bf28254c64ed9a848c189ba68aa208c9\"" Oct 30 00:07:29.865208 sshd[5143]: Connection closed by 10.0.0.1 port 33156 Oct 30 00:07:29.865562 sshd-session[5140]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:29.870258 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:33156.service: Deactivated successfully. Oct 30 00:07:29.872651 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:07:29.874428 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:07:29.875607 systemd-logind[1593]: Removed session 11. 
Oct 30 00:07:29.892235 systemd-networkd[1519]: cali0b0e4d3d971: Gained IPv6LL Oct 30 00:07:29.961201 kubelet[2810]: E1030 00:07:29.960981 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:30.021196 kubelet[2810]: I1030 00:07:30.020892 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xgt5r" podStartSLOduration=70.020876405 podStartE2EDuration="1m10.020876405s" podCreationTimestamp="2025-10-30 00:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:07:30.020426639 +0000 UTC m=+74.563254846" watchObservedRunningTime="2025-10-30 00:07:30.020876405 +0000 UTC m=+74.563704622" Oct 30 00:07:30.087970 containerd[1621]: time="2025-10-30T00:07:30.087904287Z" level=info msg="CreateContainer within sandbox \"d80cc8e6647a12704df87baa5562ca284883da6dd9c21d94549f5be3b62b7211\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab\"" Oct 30 00:07:30.088570 containerd[1621]: time="2025-10-30T00:07:30.088540267Z" level=info msg="StartContainer for \"80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab\"" Oct 30 00:07:30.089678 containerd[1621]: time="2025-10-30T00:07:30.089626815Z" level=info msg="connecting to shim 80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab" address="unix:///run/containerd/s/405424e5f604f8afa9e30289fb8d36ab4145ad0d0dc380e864cc78693459a1de" protocol=ttrpc version=3 Oct 30 00:07:30.123426 systemd[1]: Started cri-containerd-80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab.scope - libcontainer container 80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab. 
Oct 30 00:07:30.163927 containerd[1621]: time="2025-10-30T00:07:30.163443632Z" level=info msg="connecting to shim ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82" address="unix:///run/containerd/s/751fa43abca969d706a0bd33001a1658004cddb81a9c302515095eb79b270192" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:30.199366 systemd[1]: Started cri-containerd-ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82.scope - libcontainer container ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82. Oct 30 00:07:30.211022 containerd[1621]: time="2025-10-30T00:07:30.210942072Z" level=info msg="connecting to shim ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99" address="unix:///run/containerd/s/2f9eaa521a033bb9d12d0eaff0f3282d6f5fdd942a213d491318febf9b9453ed" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:07:30.230052 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:30.286720 systemd[1]: Started cri-containerd-ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99.scope - libcontainer container ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99. 
Oct 30 00:07:30.302470 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:07:30.394051 containerd[1621]: time="2025-10-30T00:07:30.393992504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:30.420781 containerd[1621]: time="2025-10-30T00:07:30.420609640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-869977fb74-wfqp7,Uid:5d5c2c33-987b-44fa-be72-89d7d6488ff0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ddb900be242a305fc532bf3da7ba039c1f61828c49650fc674a23eed4418fc82\"" Oct 30 00:07:30.421318 containerd[1621]: time="2025-10-30T00:07:30.421289143Z" level=info msg="StartContainer for \"80984fe9fe1355b9b02b89898b917ceb9b449b6a21a9f58eca684a8a28655eab\" returns successfully" Oct 30 00:07:30.558328 containerd[1621]: time="2025-10-30T00:07:30.558173329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g2t2x,Uid:0a7e0678-b33a-4d3a-b42a-5b4a4c30629b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea60eee2b0fdd982e0cc3eea3ca60f2900b62eed49d90e2c584f4da7d20cfd99\"" Oct 30 00:07:30.588431 containerd[1621]: time="2025-10-30T00:07:30.588327543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:07:30.588431 containerd[1621]: time="2025-10-30T00:07:30.588380314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:07:30.588796 kubelet[2810]: E1030 00:07:30.588643 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:07:30.588796 kubelet[2810]: E1030 00:07:30.588713 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:07:30.589609 kubelet[2810]: E1030 00:07:30.589000 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn2nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELin
uxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:30.589716 containerd[1621]: time="2025-10-30T00:07:30.589275507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:07:30.968321 containerd[1621]: time="2025-10-30T00:07:30.968256433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:30.968971 kubelet[2810]: E1030 00:07:30.968906 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:30.972674 kubelet[2810]: E1030 00:07:30.972298 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:31.010984 kubelet[2810]: I1030 00:07:31.010873 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rxbrv" podStartSLOduration=71.010854364 podStartE2EDuration="1m11.010854364s" podCreationTimestamp="2025-10-30 00:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-30 00:07:31.009872667 +0000 UTC m=+75.552700884" watchObservedRunningTime="2025-10-30 00:07:31.010854364 +0000 UTC m=+75.553682581" Oct 30 00:07:31.141828 containerd[1621]: time="2025-10-30T00:07:31.141755047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:07:31.142199 containerd[1621]: time="2025-10-30T00:07:31.141948435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:07:31.142713 kubelet[2810]: E1030 00:07:31.142621 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:31.142960 kubelet[2810]: E1030 00:07:31.142740 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:31.143036 kubelet[2810]: E1030 00:07:31.142969 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68v4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:31.143903 containerd[1621]: time="2025-10-30T00:07:31.143859589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:07:31.144952 kubelet[2810]: E1030 00:07:31.144888 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:07:31.494456 containerd[1621]: time="2025-10-30T00:07:31.494366521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:31.522158 containerd[1621]: time="2025-10-30T00:07:31.522050099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:07:31.522392 containerd[1621]: time="2025-10-30T00:07:31.522138637Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:07:31.522716 kubelet[2810]: E1030 00:07:31.522569 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:31.522716 kubelet[2810]: E1030 00:07:31.522671 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:31.523250 kubelet[2810]: E1030 00:07:31.523063 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzbbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:31.523528 containerd[1621]: time="2025-10-30T00:07:31.523248047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:07:31.525003 kubelet[2810]: E1030 00:07:31.524940 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:07:31.902151 containerd[1621]: 
time="2025-10-30T00:07:31.902052183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:31.963286 containerd[1621]: time="2025-10-30T00:07:31.963187327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:07:31.963476 containerd[1621]: time="2025-10-30T00:07:31.963289091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:07:31.963569 kubelet[2810]: E1030 00:07:31.963480 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:07:31.963569 kubelet[2810]: E1030 00:07:31.963540 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:07:31.963900 kubelet[2810]: E1030 00:07:31.963823 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt6b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:31.964048 containerd[1621]: time="2025-10-30T00:07:31.963895795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:07:31.965463 kubelet[2810]: E1030 00:07:31.965420 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:07:31.973477 kubelet[2810]: E1030 00:07:31.973195 2810 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:31.973477 kubelet[2810]: E1030 00:07:31.973471 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:31.974277 kubelet[2810]: E1030 00:07:31.974039 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:07:31.974277 kubelet[2810]: E1030 00:07:31.974219 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:07:31.974786 kubelet[2810]: E1030 00:07:31.974707 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:07:32.351108 containerd[1621]: time="2025-10-30T00:07:32.350999866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:32.374938 containerd[1621]: time="2025-10-30T00:07:32.374838447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:07:32.374938 containerd[1621]: time="2025-10-30T00:07:32.374906156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:07:32.375259 kubelet[2810]: E1030 00:07:32.375201 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:07:32.375320 kubelet[2810]: E1030 00:07:32.375263 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Oct 30 00:07:32.375454 kubelet[2810]: E1030 00:07:32.375414 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn2nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:32.376684 kubelet[2810]: E1030 00:07:32.376620 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:07:32.976658 kubelet[2810]: E1030 00:07:32.976200 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:32.977482 kubelet[2810]: E1030 00:07:32.977349 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:07:34.607431 kubelet[2810]: E1030 00:07:34.607343 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:34.608246 containerd[1621]: time="2025-10-30T00:07:34.608175602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:07:34.880946 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:54466.service - OpenSSH per-connection server daemon (10.0.0.1:54466). Oct 30 00:07:34.968235 containerd[1621]: time="2025-10-30T00:07:34.968171723Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:34.970373 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 54466 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:34.972416 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:34.980619 systemd-logind[1593]: New session 12 of user core. Oct 30 00:07:34.995355 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 30 00:07:35.119048 containerd[1621]: time="2025-10-30T00:07:35.118957015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:07:35.119048 containerd[1621]: time="2025-10-30T00:07:35.119024343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:07:35.119459 kubelet[2810]: E1030 00:07:35.119380 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:07:35.119531 kubelet[2810]: E1030 00:07:35.119467 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:07:35.119733 kubelet[2810]: E1030 00:07:35.119669 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2384e6ac67324a0e992f1b31f21c6bff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhvms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cd8f6cd-jtlv8_calico-system(0ddc0305-4c4c-4d8f-adc8-d24daa6c347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:35.122238 containerd[1621]: time="2025-10-30T00:07:35.122198162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 
00:07:35.374539 sshd[5310]: Connection closed by 10.0.0.1 port 54466 Oct 30 00:07:35.374885 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:35.380423 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:54466.service: Deactivated successfully. Oct 30 00:07:35.382914 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:07:35.384006 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:07:35.385355 systemd-logind[1593]: Removed session 12. Oct 30 00:07:35.574163 containerd[1621]: time="2025-10-30T00:07:35.574047060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:35.611654 containerd[1621]: time="2025-10-30T00:07:35.611522224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:07:35.612795 containerd[1621]: time="2025-10-30T00:07:35.611829218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:07:35.612866 kubelet[2810]: E1030 00:07:35.612429 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:07:35.613286 kubelet[2810]: E1030 00:07:35.612475 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:07:35.613350 containerd[1621]: time="2025-10-30T00:07:35.613211863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:07:35.613411 kubelet[2810]: E1030 00:07:35.613315 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhvms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cd8f6cd-jtlv8_calico-system(0ddc0305-4c4c-4d8f-adc8-d24daa6c347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:35.614846 kubelet[2810]: E1030 00:07:35.614792 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cd8f6cd-jtlv8" podUID="0ddc0305-4c4c-4d8f-adc8-d24daa6c347e" Oct 30 00:07:36.026994 containerd[1621]: time="2025-10-30T00:07:36.026927465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:36.071280 containerd[1621]: time="2025-10-30T00:07:36.071210055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:07:36.071439 containerd[1621]: time="2025-10-30T00:07:36.071310987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:07:36.071520 kubelet[2810]: E1030 00:07:36.071460 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:07:36.071589 kubelet[2810]: E1030 00:07:36.071533 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:07:36.071778 kubelet[2810]: E1030 00:07:36.071715 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v52lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:36.072927 kubelet[2810]: E1030 00:07:36.072880 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:07:36.607030 kubelet[2810]: E1030 00:07:36.606971 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:37.606717 kubelet[2810]: E1030 00:07:37.606611 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:39.607123 kubelet[2810]: E1030 00:07:39.606443 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:40.388292 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:47546.service - OpenSSH per-connection server daemon (10.0.0.1:47546). Oct 30 00:07:40.444626 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 47546 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:40.446619 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:40.452168 systemd-logind[1593]: New session 13 of user core. Oct 30 00:07:40.463270 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:07:40.633414 sshd[5343]: Connection closed by 10.0.0.1 port 47546 Oct 30 00:07:40.633909 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:40.646304 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:47546.service: Deactivated successfully. Oct 30 00:07:40.649004 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:07:40.651309 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:07:40.653505 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:47554.service - OpenSSH per-connection server daemon (10.0.0.1:47554). Oct 30 00:07:40.654935 systemd-logind[1593]: Removed session 13. 
Oct 30 00:07:40.710201 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 47554 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:40.712014 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:40.717928 systemd-logind[1593]: New session 14 of user core. Oct 30 00:07:40.732414 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:07:40.966672 sshd[5360]: Connection closed by 10.0.0.1 port 47554 Oct 30 00:07:40.967361 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:40.981909 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:47554.service: Deactivated successfully. Oct 30 00:07:40.984522 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:07:40.985925 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:07:40.990794 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:47560.service - OpenSSH per-connection server daemon (10.0.0.1:47560). Oct 30 00:07:40.992166 systemd-logind[1593]: Removed session 14. Oct 30 00:07:41.054738 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 47560 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:41.056408 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:41.061631 systemd-logind[1593]: New session 15 of user core. Oct 30 00:07:41.076328 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 30 00:07:41.386847 sshd[5375]: Connection closed by 10.0.0.1 port 47560 Oct 30 00:07:41.387337 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:41.395699 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:47560.service: Deactivated successfully. Oct 30 00:07:41.400212 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:07:41.402995 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. 
Oct 30 00:07:41.405308 systemd-logind[1593]: Removed session 15. Oct 30 00:07:42.607909 containerd[1621]: time="2025-10-30T00:07:42.607852424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:07:43.159503 containerd[1621]: time="2025-10-30T00:07:43.159442008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:43.308867 containerd[1621]: time="2025-10-30T00:07:43.308783444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:07:43.308867 containerd[1621]: time="2025-10-30T00:07:43.308846634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:07:43.309186 kubelet[2810]: E1030 00:07:43.309124 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:43.309620 kubelet[2810]: E1030 00:07:43.309196 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:43.309620 kubelet[2810]: E1030 00:07:43.309384 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzbbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:43.310651 kubelet[2810]: E1030 00:07:43.310598 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:07:43.608628 containerd[1621]: time="2025-10-30T00:07:43.608579679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:07:44.045605 containerd[1621]: 
time="2025-10-30T00:07:44.045512805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:44.120575 containerd[1621]: time="2025-10-30T00:07:44.120456766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:07:44.120768 containerd[1621]: time="2025-10-30T00:07:44.120537128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:07:44.120870 kubelet[2810]: E1030 00:07:44.120824 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:44.120912 kubelet[2810]: E1030 00:07:44.120888 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:07:44.121088 kubelet[2810]: E1030 00:07:44.121035 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68v4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:44.122286 kubelet[2810]: E1030 00:07:44.122238 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:07:44.134418 containerd[1621]: time="2025-10-30T00:07:44.134332865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\" id:\"c6538e8e59e3cf997ac98972dfd38b04d29e92f988e198aa8f5436b7cb50bbb7\" pid:5399 exited_at:{seconds:1761782864 nanos:133819963}" Oct 30 00:07:44.137154 kubelet[2810]: E1030 00:07:44.137116 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:07:45.609664 containerd[1621]: time="2025-10-30T00:07:45.609200290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:07:46.225857 containerd[1621]: time="2025-10-30T00:07:46.225763917Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:46.385897 containerd[1621]: time="2025-10-30T00:07:46.385824915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:07:46.386188 containerd[1621]: time="2025-10-30T00:07:46.385861655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:07:46.386224 kubelet[2810]: E1030 00:07:46.386139 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:07:46.386224 kubelet[2810]: E1030 00:07:46.386211 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:07:46.386683 kubelet[2810]: E1030 00:07:46.386395 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt6b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:46.388692 kubelet[2810]: E1030 00:07:46.387621 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:07:46.402115 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:47614.service - OpenSSH per-connection server daemon (10.0.0.1:47614). 
Oct 30 00:07:46.460887 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 47614 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:46.462670 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:46.467512 systemd-logind[1593]: New session 16 of user core. Oct 30 00:07:46.482260 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:07:46.754314 sshd[5416]: Connection closed by 10.0.0.1 port 47614 Oct 30 00:07:46.754572 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:46.759619 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:47614.service: Deactivated successfully. Oct 30 00:07:46.761980 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:07:46.762830 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:07:46.764275 systemd-logind[1593]: Removed session 16. Oct 30 00:07:47.608233 containerd[1621]: time="2025-10-30T00:07:47.608144570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:07:48.069784 containerd[1621]: time="2025-10-30T00:07:48.069709816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:48.099333 containerd[1621]: time="2025-10-30T00:07:48.099246762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:07:48.099523 containerd[1621]: time="2025-10-30T00:07:48.099302077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:07:48.099743 kubelet[2810]: E1030 00:07:48.099663 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:07:48.100136 kubelet[2810]: E1030 00:07:48.099750 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:07:48.100136 kubelet[2810]: E1030 00:07:48.099913 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn2nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{C
apabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:48.102153 containerd[1621]: time="2025-10-30T00:07:48.102108612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:07:48.445197 containerd[1621]: time="2025-10-30T00:07:48.444597000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:07:48.448008 containerd[1621]: time="2025-10-30T00:07:48.447852967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:07:48.448008 containerd[1621]: time="2025-10-30T00:07:48.447973295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:07:48.448267 kubelet[2810]: E1030 00:07:48.448210 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:07:48.448330 kubelet[2810]: E1030 00:07:48.448280 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:07:48.448516 kubelet[2810]: E1030 00:07:48.448432 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn2nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:07:48.449675 kubelet[2810]: E1030 00:07:48.449633 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:07:49.612109 kubelet[2810]: E1030 00:07:49.612027 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:07:49.612656 kubelet[2810]: E1030 00:07:49.612050 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cd8f6cd-jtlv8" podUID="0ddc0305-4c4c-4d8f-adc8-d24daa6c347e" Oct 30 00:07:51.781546 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:46762.service - OpenSSH per-connection server daemon (10.0.0.1:46762). Oct 30 00:07:51.849419 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 46762 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:51.851248 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:51.856121 systemd-logind[1593]: New session 17 of user core. Oct 30 00:07:51.865222 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 30 00:07:52.022962 sshd[5437]: Connection closed by 10.0.0.1 port 46762 Oct 30 00:07:52.023751 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:52.029308 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:46762.service: Deactivated successfully. Oct 30 00:07:52.031768 systemd[1]: session-17.scope: Deactivated successfully. Oct 30 00:07:52.034211 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. Oct 30 00:07:52.038963 systemd-logind[1593]: Removed session 17. 
Oct 30 00:07:54.608014 kubelet[2810]: E1030 00:07:54.607914 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:07:55.607524 kubelet[2810]: E1030 00:07:55.607407 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:07:57.038005 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:46764.service - OpenSSH per-connection server daemon (10.0.0.1:46764). Oct 30 00:07:57.105531 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 46764 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:07:57.107510 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:07:57.113239 systemd-logind[1593]: New session 18 of user core. Oct 30 00:07:57.121289 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 30 00:07:57.253958 sshd[5458]: Connection closed by 10.0.0.1 port 46764 Oct 30 00:07:57.254351 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Oct 30 00:07:57.258939 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:46764.service: Deactivated successfully. Oct 30 00:07:57.262209 systemd[1]: session-18.scope: Deactivated successfully. Oct 30 00:07:57.264659 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. Oct 30 00:07:57.266312 systemd-logind[1593]: Removed session 18. Oct 30 00:07:59.607407 kubelet[2810]: E1030 00:07:59.607333 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:08:01.609045 containerd[1621]: time="2025-10-30T00:08:01.608965296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:08:01.610553 kubelet[2810]: E1030 00:08:01.610199 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:08:01.936836 containerd[1621]: time="2025-10-30T00:08:01.936567377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:01.944884 containerd[1621]: time="2025-10-30T00:08:01.944769609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:08:01.944884 containerd[1621]: time="2025-10-30T00:08:01.944878324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:08:01.945177 kubelet[2810]: E1030 00:08:01.945108 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:08:01.945250 kubelet[2810]: E1030 00:08:01.945174 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:08:01.945531 kubelet[2810]: E1030 00:08:01.945474 2810 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2384e6ac67324a0e992f1b31f21c6bff,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhvms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cd8f6cd-jtlv8_calico-system(0ddc0305-4c4c-4d8f-adc8-d24daa6c347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:01.945653 containerd[1621]: time="2025-10-30T00:08:01.945569872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 
30 00:08:02.281160 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:51430.service - OpenSSH per-connection server daemon (10.0.0.1:51430). Oct 30 00:08:02.310871 containerd[1621]: time="2025-10-30T00:08:02.310807933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:02.318010 containerd[1621]: time="2025-10-30T00:08:02.317914463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:08:02.318010 containerd[1621]: time="2025-10-30T00:08:02.318006497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:08:02.318254 kubelet[2810]: E1030 00:08:02.318190 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:08:02.318322 kubelet[2810]: E1030 00:08:02.318262 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:08:02.318633 containerd[1621]: time="2025-10-30T00:08:02.318576614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:08:02.319333 kubelet[2810]: E1030 
00:08:02.319254 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v52lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69bd87fbdd-zshjp_calico-system(fb783c3f-c8d1-42f0-a262-b7fd408f60b3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:02.320570 kubelet[2810]: E1030 00:08:02.320469 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:08:02.328974 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 51430 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:02.329960 sshd-session[5479]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:02.337351 systemd-logind[1593]: New session 19 of user core. Oct 30 00:08:02.348327 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 30 00:08:02.479746 sshd[5482]: Connection closed by 10.0.0.1 port 51430 Oct 30 00:08:02.479922 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:02.485963 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:51430.service: Deactivated successfully. Oct 30 00:08:02.488489 systemd[1]: session-19.scope: Deactivated successfully. Oct 30 00:08:02.490799 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. Oct 30 00:08:02.492133 systemd-logind[1593]: Removed session 19. Oct 30 00:08:02.682350 containerd[1621]: time="2025-10-30T00:08:02.682146661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:02.796186 containerd[1621]: time="2025-10-30T00:08:02.796030386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:08:02.796186 containerd[1621]: time="2025-10-30T00:08:02.796121037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:08:02.796459 kubelet[2810]: E1030 00:08:02.796413 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:08:02.796898 
kubelet[2810]: E1030 00:08:02.796479 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:08:02.796898 kubelet[2810]: E1030 00:08:02.796656 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhvms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*
10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78cd8f6cd-jtlv8_calico-system(0ddc0305-4c4c-4d8f-adc8-d24daa6c347e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:02.797902 kubelet[2810]: E1030 00:08:02.797850 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cd8f6cd-jtlv8" podUID="0ddc0305-4c4c-4d8f-adc8-d24daa6c347e" Oct 30 00:08:06.607866 containerd[1621]: time="2025-10-30T00:08:06.607804883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:08:06.986730 containerd[1621]: time="2025-10-30T00:08:06.986571365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:07.048271 containerd[1621]: time="2025-10-30T00:08:07.048150324Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:08:07.048271 containerd[1621]: time="2025-10-30T00:08:07.048199997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:07.048593 kubelet[2810]: E1030 00:08:07.048425 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:07.048593 kubelet[2810]: E1030 00:08:07.048489 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:07.049223 kubelet[2810]: E1030 00:08:07.048644 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68v4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869977fb74-rhndp_calico-apiserver(2f8d59a7-a1e6-4aca-91ca-94959e3f1a19): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:07.050703 kubelet[2810]: E1030 00:08:07.050655 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:08:07.496792 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:51444.service - OpenSSH per-connection server daemon (10.0.0.1:51444). Oct 30 00:08:07.546853 sshd[5496]: Accepted publickey for core from 10.0.0.1 port 51444 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:07.548375 sshd-session[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:07.553370 systemd-logind[1593]: New session 20 of user core. Oct 30 00:08:07.565304 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 30 00:08:07.607786 containerd[1621]: time="2025-10-30T00:08:07.607671687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:08:07.707222 sshd[5499]: Connection closed by 10.0.0.1 port 51444 Oct 30 00:08:07.706564 sshd-session[5496]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:07.717213 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:51444.service: Deactivated successfully. Oct 30 00:08:07.719888 systemd[1]: session-20.scope: Deactivated successfully. Oct 30 00:08:07.722149 systemd-logind[1593]: Session 20 logged out. Waiting for processes to exit. 
Oct 30 00:08:07.725767 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:51448.service - OpenSSH per-connection server daemon (10.0.0.1:51448). Oct 30 00:08:07.726972 systemd-logind[1593]: Removed session 20. Oct 30 00:08:07.774353 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 51448 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:07.776317 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:07.783203 systemd-logind[1593]: New session 21 of user core. Oct 30 00:08:07.793316 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 30 00:08:07.968459 containerd[1621]: time="2025-10-30T00:08:07.968390907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:07.969999 containerd[1621]: time="2025-10-30T00:08:07.969945003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:08:07.970103 containerd[1621]: time="2025-10-30T00:08:07.970051954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:07.970422 kubelet[2810]: E1030 00:08:07.970326 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:07.970422 kubelet[2810]: E1030 00:08:07.970413 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:08:07.970611 kubelet[2810]: E1030 00:08:07.970565 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qzbbs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-869977fb74-wfqp7_calico-apiserver(5d5c2c33-987b-44fa-be72-89d7d6488ff0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:07.971825 kubelet[2810]: E1030 00:08:07.971777 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:08:08.162473 sshd[5515]: Connection closed by 10.0.0.1 port 51448 Oct 30 00:08:08.164438 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:08.173312 systemd[1]: 
sshd@20-10.0.0.102:22-10.0.0.1:51448.service: Deactivated successfully. Oct 30 00:08:08.175812 systemd[1]: session-21.scope: Deactivated successfully. Oct 30 00:08:08.176824 systemd-logind[1593]: Session 21 logged out. Waiting for processes to exit. Oct 30 00:08:08.180138 systemd[1]: Started sshd@21-10.0.0.102:22-10.0.0.1:51456.service - OpenSSH per-connection server daemon (10.0.0.1:51456). Oct 30 00:08:08.181011 systemd-logind[1593]: Removed session 21. Oct 30 00:08:08.246035 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 51456 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:08.247974 sshd-session[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:08.253476 systemd-logind[1593]: New session 22 of user core. Oct 30 00:08:08.266271 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 30 00:08:08.911622 sshd[5529]: Connection closed by 10.0.0.1 port 51456 Oct 30 00:08:08.912376 sshd-session[5526]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:08.928548 systemd[1]: sshd@21-10.0.0.102:22-10.0.0.1:51456.service: Deactivated successfully. Oct 30 00:08:08.931840 systemd[1]: session-22.scope: Deactivated successfully. Oct 30 00:08:08.932823 systemd-logind[1593]: Session 22 logged out. Waiting for processes to exit. Oct 30 00:08:08.938788 systemd[1]: Started sshd@22-10.0.0.102:22-10.0.0.1:51458.service - OpenSSH per-connection server daemon (10.0.0.1:51458). Oct 30 00:08:08.940157 systemd-logind[1593]: Removed session 22. Oct 30 00:08:08.998623 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 51458 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:09.000340 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:09.005454 systemd-logind[1593]: New session 23 of user core. Oct 30 00:08:09.013291 systemd[1]: Started session-23.scope - Session 23 of User core. 
Oct 30 00:08:09.275358 sshd[5552]: Connection closed by 10.0.0.1 port 51458 Oct 30 00:08:09.278931 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:09.295435 systemd[1]: sshd@22-10.0.0.102:22-10.0.0.1:51458.service: Deactivated successfully. Oct 30 00:08:09.304185 systemd[1]: session-23.scope: Deactivated successfully. Oct 30 00:08:09.308311 systemd-logind[1593]: Session 23 logged out. Waiting for processes to exit. Oct 30 00:08:09.313869 systemd-logind[1593]: Removed session 23. Oct 30 00:08:09.317194 systemd[1]: Started sshd@23-10.0.0.102:22-10.0.0.1:51464.service - OpenSSH per-connection server daemon (10.0.0.1:51464). Oct 30 00:08:09.374701 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:09.376725 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:09.383195 systemd-logind[1593]: New session 24 of user core. Oct 30 00:08:09.388527 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 30 00:08:09.522224 sshd[5566]: Connection closed by 10.0.0.1 port 51464 Oct 30 00:08:09.522951 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:09.532770 systemd[1]: sshd@23-10.0.0.102:22-10.0.0.1:51464.service: Deactivated successfully. Oct 30 00:08:09.535371 systemd[1]: session-24.scope: Deactivated successfully. Oct 30 00:08:09.536540 systemd-logind[1593]: Session 24 logged out. Waiting for processes to exit. Oct 30 00:08:09.538710 systemd-logind[1593]: Removed session 24. 
Oct 30 00:08:12.607301 containerd[1621]: time="2025-10-30T00:08:12.606983577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:08:13.054935 containerd[1621]: time="2025-10-30T00:08:13.054869051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:13.080676 containerd[1621]: time="2025-10-30T00:08:13.080607918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:08:13.080847 containerd[1621]: time="2025-10-30T00:08:13.080671398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:08:13.081024 kubelet[2810]: E1030 00:08:13.080968 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:08:13.081504 kubelet[2810]: E1030 00:08:13.081041 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:08:13.081504 kubelet[2810]: E1030 00:08:13.081308 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt6b5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g2t2x_calico-system(0a7e0678-b33a-4d3a-b42a-5b4a4c30629b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:13.083305 kubelet[2810]: E1030 00:08:13.083180 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g2t2x" podUID="0a7e0678-b33a-4d3a-b42a-5b4a4c30629b" Oct 30 00:08:14.018333 containerd[1621]: time="2025-10-30T00:08:14.018274200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae0f135fbd60186c7c915759c8d45fdd641ddaa976a01bce6c3d6b62c5db4d69\" 
id:\"a15e9cb30ab5d94457f1dfd728cc593bf61337f6ffc39a973f7f735b2f9f6bd0\" pid:5591 exited_at:{seconds:1761782894 nanos:17611279}" Oct 30 00:08:14.540064 systemd[1]: Started sshd@24-10.0.0.102:22-10.0.0.1:40864.service - OpenSSH per-connection server daemon (10.0.0.1:40864). Oct 30 00:08:14.838647 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 40864 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:14.840368 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:14.845565 systemd-logind[1593]: New session 25 of user core. Oct 30 00:08:14.852260 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 30 00:08:14.991045 sshd[5607]: Connection closed by 10.0.0.1 port 40864 Oct 30 00:08:14.991553 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:14.997883 systemd[1]: sshd@24-10.0.0.102:22-10.0.0.1:40864.service: Deactivated successfully. Oct 30 00:08:15.000176 systemd[1]: session-25.scope: Deactivated successfully. Oct 30 00:08:15.001215 systemd-logind[1593]: Session 25 logged out. Waiting for processes to exit. Oct 30 00:08:15.002610 systemd-logind[1593]: Removed session 25. 
Oct 30 00:08:15.608407 kubelet[2810]: E1030 00:08:15.608132 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69bd87fbdd-zshjp" podUID="fb783c3f-c8d1-42f0-a262-b7fd408f60b3" Oct 30 00:08:15.609550 kubelet[2810]: E1030 00:08:15.609493 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78cd8f6cd-jtlv8" podUID="0ddc0305-4c4c-4d8f-adc8-d24daa6c347e" Oct 30 00:08:16.607883 containerd[1621]: time="2025-10-30T00:08:16.607835742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:08:17.073520 containerd[1621]: time="2025-10-30T00:08:17.073465224Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Oct 30 00:08:17.075010 containerd[1621]: time="2025-10-30T00:08:17.074932333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:08:17.075010 containerd[1621]: time="2025-10-30T00:08:17.074969593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:08:17.075276 kubelet[2810]: E1030 00:08:17.075215 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:08:17.075693 kubelet[2810]: E1030 00:08:17.075287 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:08:17.075693 kubelet[2810]: E1030 00:08:17.075451 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn2nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:17.078805 containerd[1621]: time="2025-10-30T00:08:17.078751790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:08:17.416874 containerd[1621]: time="2025-10-30T00:08:17.416679554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:08:17.419902 containerd[1621]: time="2025-10-30T00:08:17.419844386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:08:17.420171 containerd[1621]: time="2025-10-30T00:08:17.419944515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:08:17.420333 kubelet[2810]: E1030 00:08:17.420265 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:08:17.420397 kubelet[2810]: E1030 00:08:17.420352 2810 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:08:17.420636 kubelet[2810]: E1030 
00:08:17.420574 2810 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn2nd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-bgd9q_calico-system(d62c2877-00ac-4394-911e-002e28febfd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:08:17.421971 kubelet[2810]: E1030 00:08:17.421894 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bgd9q" podUID="d62c2877-00ac-4394-911e-002e28febfd2" Oct 30 00:08:19.609240 kubelet[2810]: E1030 00:08:19.609190 2810 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:08:20.005849 systemd[1]: Started sshd@25-10.0.0.102:22-10.0.0.1:40868.service - OpenSSH per-connection server daemon (10.0.0.1:40868). Oct 30 00:08:20.059888 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 40868 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:20.062497 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:20.068580 systemd-logind[1593]: New session 26 of user core. 
Oct 30 00:08:20.081389 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 30 00:08:20.216664 sshd[5628]: Connection closed by 10.0.0.1 port 40868 Oct 30 00:08:20.218422 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:20.226895 systemd-logind[1593]: Session 26 logged out. Waiting for processes to exit. Oct 30 00:08:20.229762 systemd[1]: sshd@25-10.0.0.102:22-10.0.0.1:40868.service: Deactivated successfully. Oct 30 00:08:20.232949 systemd[1]: session-26.scope: Deactivated successfully. Oct 30 00:08:20.237443 systemd-logind[1593]: Removed session 26. Oct 30 00:08:22.607911 kubelet[2810]: E1030 00:08:22.607728 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-rhndp" podUID="2f8d59a7-a1e6-4aca-91ca-94959e3f1a19" Oct 30 00:08:22.608491 kubelet[2810]: E1030 00:08:22.608266 2810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-869977fb74-wfqp7" podUID="5d5c2c33-987b-44fa-be72-89d7d6488ff0" Oct 30 00:08:25.230912 systemd[1]: Started sshd@26-10.0.0.102:22-10.0.0.1:49344.service - OpenSSH per-connection 
server daemon (10.0.0.1:49344). Oct 30 00:08:25.289013 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 49344 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:08:25.291012 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:08:25.296580 systemd-logind[1593]: New session 27 of user core. Oct 30 00:08:25.305280 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 30 00:08:25.478558 sshd[5648]: Connection closed by 10.0.0.1 port 49344 Oct 30 00:08:25.478909 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Oct 30 00:08:25.484267 systemd[1]: sshd@26-10.0.0.102:22-10.0.0.1:49344.service: Deactivated successfully. Oct 30 00:08:25.486987 systemd[1]: session-27.scope: Deactivated successfully. Oct 30 00:08:25.488124 systemd-logind[1593]: Session 27 logged out. Waiting for processes to exit. Oct 30 00:08:25.490045 systemd-logind[1593]: Removed session 27.