Nov 6 00:22:38.744738 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:11:41 -00 2025
Nov 6 00:22:38.744786 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:22:38.744805 kernel: BIOS-provided physical RAM map:
Nov 6 00:22:38.744814 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 6 00:22:38.744823 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 6 00:22:38.744832 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 6 00:22:38.744844 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 6 00:22:38.744853 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 6 00:22:38.744871 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 6 00:22:38.744881 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 6 00:22:38.744893 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 6 00:22:38.744902 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 6 00:22:38.744912 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 6 00:22:38.744922 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 6 00:22:38.744933 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 6 00:22:38.744946 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 6 00:22:38.744960 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 6 00:22:38.744970 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 6 00:22:38.744982 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 6 00:22:38.744991 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 6 00:22:38.745001 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 6 00:22:38.745011 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 6 00:22:38.745021 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 6 00:22:38.745030 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:22:38.745040 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 6 00:22:38.745054 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 6 00:22:38.745083 kernel: NX (Execute Disable) protection: active
Nov 6 00:22:38.745093 kernel: APIC: Static calls initialized
Nov 6 00:22:38.745103 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Nov 6 00:22:38.745113 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Nov 6 00:22:38.745122 kernel: extended physical RAM map:
Nov 6 00:22:38.745132 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 6 00:22:38.745141 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 6 00:22:38.745151 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 6 00:22:38.745161 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 6 00:22:38.745171 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 6 00:22:38.745185 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 6 00:22:38.745195 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 6 00:22:38.745204 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Nov 6 00:22:38.745214 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Nov 6 00:22:38.745229 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Nov 6 00:22:38.745242 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Nov 6 00:22:38.745253 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Nov 6 00:22:38.745263 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 6 00:22:38.745273 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 6 00:22:38.745284 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 6 00:22:38.745294 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 6 00:22:38.745305 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 6 00:22:38.745316 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 6 00:22:38.745328 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 6 00:22:38.745339 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 6 00:22:38.745349 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 6 00:22:38.745360 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 6 00:22:38.745370 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 6 00:22:38.745381 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 6 00:22:38.745392 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 6 00:22:38.745402 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 6 00:22:38.745412 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 6 00:22:38.745427 kernel: efi: EFI v2.7 by EDK II
Nov 6 00:22:38.745438 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 6 00:22:38.745451 kernel: random: crng init done
Nov 6 00:22:38.745465 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 6 00:22:38.745475 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 6 00:22:38.745503 kernel: secureboot: Secure boot disabled
Nov 6 00:22:38.745513 kernel: SMBIOS 2.8 present.
Nov 6 00:22:38.745523 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 6 00:22:38.745653 kernel: DMI: Memory slots populated: 1/1
Nov 6 00:22:38.745663 kernel: Hypervisor detected: KVM
Nov 6 00:22:38.745673 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 6 00:22:38.745683 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 6 00:22:38.745693 kernel: kvm-clock: using sched offset of 5419130279 cycles
Nov 6 00:22:38.745708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 6 00:22:38.745720 kernel: tsc: Detected 2794.748 MHz processor
Nov 6 00:22:38.745731 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 6 00:22:38.745742 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 6 00:22:38.745752 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 6 00:22:38.745763 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 6 00:22:38.745774 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 6 00:22:38.745788 kernel: Using GB pages for direct mapping
Nov 6 00:22:38.745799 kernel: ACPI: Early table checksum verification disabled
Nov 6 00:22:38.745810 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 6 00:22:38.745820 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 6 00:22:38.745831 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:22:38.745842 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:22:38.745852 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 6 00:22:38.745863 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:22:38.745877 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:22:38.745888 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:22:38.745899 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 00:22:38.745911 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 6 00:22:38.745921 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 6 00:22:38.745932 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 6 00:22:38.745943 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 6 00:22:38.745958 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 6 00:22:38.745969 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 6 00:22:38.745979 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 6 00:22:38.745990 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 6 00:22:38.746001 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 6 00:22:38.746012 kernel: No NUMA configuration found
Nov 6 00:22:38.746022 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 6 00:22:38.746037 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 6 00:22:38.746048 kernel: Zone ranges:
Nov 6 00:22:38.746059 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 6 00:22:38.746087 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 6 00:22:38.746097 kernel: Normal empty
Nov 6 00:22:38.746108 kernel: Device empty
Nov 6 00:22:38.746119 kernel: Movable zone start for each node
Nov 6 00:22:38.746129 kernel: Early memory node ranges
Nov 6 00:22:38.746144 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 6 00:22:38.746158 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 6 00:22:38.746168 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 6 00:22:38.746179 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 6 00:22:38.746189 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 6 00:22:38.746201 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 6 00:22:38.746211 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 6 00:22:38.746224 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 6 00:22:38.746238 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 6 00:22:38.746249 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:22:38.746268 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 6 00:22:38.746282 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 6 00:22:38.746293 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 6 00:22:38.746304 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 6 00:22:38.746315 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 6 00:22:38.746327 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 6 00:22:38.746341 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 6 00:22:38.746353 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 6 00:22:38.746364 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 6 00:22:38.746375 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 6 00:22:38.746389 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 6 00:22:38.746400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 6 00:22:38.746412 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 6 00:22:38.746423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 6 00:22:38.746434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 6 00:22:38.746445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 6 00:22:38.746457 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 6 00:22:38.746470 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 6 00:22:38.746491 kernel: TSC deadline timer available
Nov 6 00:22:38.746503 kernel: CPU topo: Max. logical packages: 1
Nov 6 00:22:38.746514 kernel: CPU topo: Max. logical dies: 1
Nov 6 00:22:38.746525 kernel: CPU topo: Max. dies per package: 1
Nov 6 00:22:38.746535 kernel: CPU topo: Max. threads per core: 1
Nov 6 00:22:38.746547 kernel: CPU topo: Num. cores per package: 4
Nov 6 00:22:38.746558 kernel: CPU topo: Num. threads per package: 4
Nov 6 00:22:38.746573 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 6 00:22:38.746584 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 6 00:22:38.746595 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 6 00:22:38.746606 kernel: kvm-guest: setup PV sched yield
Nov 6 00:22:38.746617 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 6 00:22:38.746628 kernel: Booting paravirtualized kernel on KVM
Nov 6 00:22:38.746640 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 6 00:22:38.746654 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 6 00:22:38.746665 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 6 00:22:38.746676 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 6 00:22:38.746687 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 6 00:22:38.746698 kernel: kvm-guest: PV spinlocks enabled
Nov 6 00:22:38.746709 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 6 00:22:38.746726 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:22:38.746741 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 00:22:38.746752 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 00:22:38.746763 kernel: Fallback order for Node 0: 0
Nov 6 00:22:38.746774 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 6 00:22:38.746785 kernel: Policy zone: DMA32
Nov 6 00:22:38.746796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 00:22:38.746809 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 6 00:22:38.746820 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 6 00:22:38.746834 kernel: ftrace: allocated 157 pages with 5 groups
Nov 6 00:22:38.746846 kernel: Dynamic Preempt: voluntary
Nov 6 00:22:38.746857 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 00:22:38.746870 kernel: rcu: RCU event tracing is enabled.
Nov 6 00:22:38.746881 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 6 00:22:38.746892 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 00:22:38.746906 kernel: Rude variant of Tasks RCU enabled.
Nov 6 00:22:38.746918 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 00:22:38.746929 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 00:22:38.746940 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 6 00:22:38.746954 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:22:38.746966 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:22:38.746978 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 00:22:38.746991 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 6 00:22:38.747002 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 00:22:38.747014 kernel: Console: colour dummy device 80x25
Nov 6 00:22:38.747025 kernel: printk: legacy console [ttyS0] enabled
Nov 6 00:22:38.747035 kernel: ACPI: Core revision 20240827
Nov 6 00:22:38.747046 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 6 00:22:38.747058 kernel: APIC: Switch to symmetric I/O mode setup
Nov 6 00:22:38.747246 kernel: x2apic enabled
Nov 6 00:22:38.747258 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 6 00:22:38.747269 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 6 00:22:38.747280 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 6 00:22:38.747292 kernel: kvm-guest: setup PV IPIs
Nov 6 00:22:38.747304 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 6 00:22:38.747315 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:22:38.747330 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 6 00:22:38.747341 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 6 00:22:38.747352 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 6 00:22:38.747363 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 6 00:22:38.747374 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 6 00:22:38.747385 kernel: Spectre V2 : Mitigation: Retpolines
Nov 6 00:22:38.747397 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 6 00:22:38.748238 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 6 00:22:38.748251 kernel: active return thunk: retbleed_return_thunk
Nov 6 00:22:38.748262 kernel: RETBleed: Mitigation: untrained return thunk
Nov 6 00:22:38.748278 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 6 00:22:38.748290 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 6 00:22:38.748302 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 6 00:22:38.748314 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 6 00:22:38.748329 kernel: active return thunk: srso_return_thunk
Nov 6 00:22:38.748341 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 6 00:22:38.748353 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 6 00:22:38.748364 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 6 00:22:38.748376 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 6 00:22:38.748387 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 6 00:22:38.748399 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 6 00:22:38.748414 kernel: Freeing SMP alternatives memory: 32K
Nov 6 00:22:38.748426 kernel: pid_max: default: 32768 minimum: 301
Nov 6 00:22:38.748438 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 6 00:22:38.748449 kernel: landlock: Up and running.
Nov 6 00:22:38.748461 kernel: SELinux: Initializing.
Nov 6 00:22:38.748473 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:22:38.748504 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 00:22:38.748520 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 6 00:22:38.748531 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 6 00:22:38.748543 kernel: ... version: 0
Nov 6 00:22:38.748554 kernel: ... bit width: 48
Nov 6 00:22:38.748565 kernel: ... generic registers: 6
Nov 6 00:22:38.748577 kernel: ... value mask: 0000ffffffffffff
Nov 6 00:22:38.748589 kernel: ... max period: 00007fffffffffff
Nov 6 00:22:38.748604 kernel: ... fixed-purpose events: 0
Nov 6 00:22:38.748615 kernel: ... event mask: 000000000000003f
Nov 6 00:22:38.748626 kernel: signal: max sigframe size: 1776
Nov 6 00:22:38.748637 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 00:22:38.748648 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 00:22:38.748664 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 6 00:22:38.748674 kernel: smp: Bringing up secondary CPUs ...
Nov 6 00:22:38.748687 kernel: smpboot: x86: Booting SMP configuration:
Nov 6 00:22:38.748697 kernel: .... node #0, CPUs: #1 #2 #3
Nov 6 00:22:38.748707 kernel: smp: Brought up 1 node, 4 CPUs
Nov 6 00:22:38.748717 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 6 00:22:38.748728 kernel: Memory: 2445196K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 114668K reserved, 0K cma-reserved)
Nov 6 00:22:38.748738 kernel: devtmpfs: initialized
Nov 6 00:22:38.748748 kernel: x86/mm: Memory block size: 128MB
Nov 6 00:22:38.748761 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 6 00:22:38.748771 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 6 00:22:38.748781 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 6 00:22:38.748791 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 6 00:22:38.748803 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 6 00:22:38.748815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 6 00:22:38.748826 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 00:22:38.748839 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 6 00:22:38.748850 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 00:22:38.748861 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 00:22:38.748871 kernel: audit: initializing netlink subsys (disabled)
Nov 6 00:22:38.748881 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 00:22:38.748891 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 6 00:22:38.748901 kernel: audit: type=2000 audit(1762388554.633:1): state=initialized audit_enabled=0 res=1
Nov 6 00:22:38.748913 kernel: cpuidle: using governor menu
Nov 6 00:22:38.748923 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 00:22:38.748933 kernel: dca service started, version 1.12.1
Nov 6 00:22:38.748944 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 6 00:22:38.748954 kernel: PCI: Using configuration type 1 for base access
Nov 6 00:22:38.748964 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 6 00:22:38.748974 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 00:22:38.748986 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 00:22:38.748996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 00:22:38.749006 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 00:22:38.749016 kernel: ACPI: Added _OSI(Module Device)
Nov 6 00:22:38.749026 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 00:22:38.749036 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 00:22:38.749046 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 00:22:38.749059 kernel: ACPI: Interpreter enabled
Nov 6 00:22:38.749082 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 6 00:22:38.749092 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 6 00:22:38.749103 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 6 00:22:38.749113 kernel: PCI: Using E820 reservations for host bridge windows
Nov 6 00:22:38.749123 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 6 00:22:38.749133 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 00:22:38.749493 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 00:22:38.749690 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 6 00:22:38.749871 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 6 00:22:38.749884 kernel: PCI host bridge to bus 0000:00
Nov 6 00:22:38.750080 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 6 00:22:38.750277 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 6 00:22:38.750548 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 6 00:22:38.750775 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 6 00:22:38.750949 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 6 00:22:38.751152 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 6 00:22:38.751342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 00:22:38.751592 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 6 00:22:38.751815 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 6 00:22:38.752025 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 6 00:22:38.752262 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 6 00:22:38.752471 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 6 00:22:38.752686 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 6 00:22:38.752916 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 6 00:22:38.753139 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 6 00:22:38.753327 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 6 00:22:38.753516 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 6 00:22:38.753711 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 6 00:22:38.753901 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 6 00:22:38.754095 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 6 00:22:38.754277 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 6 00:22:38.754467 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 6 00:22:38.754656 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 6 00:22:38.754835 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 6 00:22:38.755022 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 6 00:22:38.755243 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 6 00:22:38.755451 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 6 00:22:38.755649 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 6 00:22:38.755843 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 6 00:22:38.756029 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 6 00:22:38.756235 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 6 00:22:38.756433 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 6 00:22:38.756644 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 6 00:22:38.756660 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 6 00:22:38.756672 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 6 00:22:38.756684 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 6 00:22:38.756699 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 6 00:22:38.756710 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 6 00:22:38.756721 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 6 00:22:38.756732 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 6 00:22:38.756743 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 6 00:22:38.756754 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 6 00:22:38.756765 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 6 00:22:38.756779 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 6 00:22:38.756790 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 6 00:22:38.756801 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 6 00:22:38.756812 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 6 00:22:38.756823 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 6 00:22:38.756834 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 6 00:22:38.756845 kernel: iommu: Default domain type: Translated
Nov 6 00:22:38.756859 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 6 00:22:38.756874 kernel: efivars: Registered efivars operations
Nov 6 00:22:38.756885 kernel: PCI: Using ACPI for IRQ routing
Nov 6 00:22:38.756897 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 6 00:22:38.756908 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 6 00:22:38.756919 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 6 00:22:38.756930 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Nov 6 00:22:38.756944 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Nov 6 00:22:38.756955 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 6 00:22:38.756966 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 6 00:22:38.756977 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 6 00:22:38.756988 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 6 00:22:38.757211 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 6 00:22:38.757417 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 6 00:22:38.757641 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 6 00:22:38.757658 kernel: vgaarb: loaded
Nov 6 00:22:38.757670 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 6 00:22:38.757681 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 6 00:22:38.757692 kernel: clocksource: Switched to clocksource kvm-clock
Nov 6 00:22:38.757704 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 00:22:38.757716 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 00:22:38.757732 kernel: pnp: PnP ACPI init
Nov 6 00:22:38.757973 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 6 00:22:38.757995 kernel: pnp: PnP ACPI: found 6 devices
Nov 6 00:22:38.758007 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 6 00:22:38.758020 kernel: NET: Registered PF_INET protocol family
Nov 6 00:22:38.758033 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 00:22:38.758047 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 6 00:22:38.758059 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 00:22:38.758088 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 00:22:38.758100 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 6 00:22:38.758111 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 6 00:22:38.758123 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:22:38.758135 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 00:22:38.758150 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 00:22:38.758162 kernel: NET: Registered PF_XDP protocol family
Nov 6 00:22:38.758367 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 6 00:22:38.758573 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 6 00:22:38.758756 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 6 00:22:38.758938 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 6 00:22:38.759138 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 6 00:22:38.759333 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 6 00:22:38.759539 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 6 00:22:38.759730 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 6 00:22:38.759746 kernel: PCI: CLS 0 bytes, default 64
Nov 6 00:22:38.759759 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 6 00:22:38.759776 kernel: Initialise system trusted keyrings
Nov 6 00:22:38.759789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 6 00:22:38.759801 kernel: Key type asymmetric registered
Nov 6 00:22:38.759812 kernel: Asymmetric key parser 'x509' registered
Nov 6 00:22:38.759824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 00:22:38.759836 kernel: io scheduler mq-deadline registered
Nov 6 00:22:38.759850 kernel: io scheduler kyber registered
Nov 6 00:22:38.759862 kernel: io scheduler bfq registered
Nov 6 00:22:38.759874 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 6 00:22:38.759886 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 6 00:22:38.759898 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 6 00:22:38.759910 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 6 00:22:38.759922 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 00:22:38.759937 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 6 00:22:38.759954 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 6 00:22:38.759966 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 6 00:22:38.759978 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 6 00:22:38.760214 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 6 00:22:38.760233 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 6 00:22:38.760434 kernel: rtc_cmos 00:04: registered as rtc0
Nov 6 00:22:38.760665 kernel: rtc_cmos 00:04: setting system clock to 2025-11-06T00:22:36 UTC (1762388556)
Nov 6 00:22:38.760853 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 6 00:22:38.760867 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 6 00:22:38.760879 kernel: efifb: probing for efifb
Nov 6 00:22:38.760890 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 6 00:22:38.760903 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 6 00:22:38.760919 kernel: efifb: scrolling: redraw
Nov 6 00:22:38.760930 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 6 00:22:38.760942 kernel: Console: switching to colour frame buffer device 160x50
Nov 6 00:22:38.760954 kernel: fb0: EFI VGA frame buffer device
Nov 6 00:22:38.760971 kernel: pstore: Using crash dump compression: deflate
Nov 6 00:22:38.760984 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 6 00:22:38.760996 kernel: NET: Registered PF_INET6 protocol family
Nov 6 00:22:38.779152 kernel: Segment Routing with IPv6
Nov 6 00:22:38.779178 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 00:22:38.779190 kernel: NET: Registered PF_PACKET protocol family
Nov 6 00:22:38.779200 kernel: Key type dns_resolver registered
Nov 6 00:22:38.779212 kernel: IPI shorthand broadcast: enabled
Nov 6 00:22:38.779223 kernel: sched_clock: Marking stable (2239002999, 340997545)->(2793212530, -213211986)
Nov 6 00:22:38.779234 kernel: registered taskstats version 1
Nov 6 00:22:38.779244 kernel: Loading compiled-in X.509 certificates
Nov 6 00:22:38.779257 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 92154d1aa04a8c1424f65981683e67110e07d121'
Nov 6 00:22:38.779268 kernel: Demotion targets for Node 0: null
Nov 6 00:22:38.779279 kernel: Key type .fscrypt registered
Nov 6 00:22:38.779289 kernel: Key type fscrypt-provisioning registered
Nov 6 00:22:38.779300 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 00:22:38.779310 kernel: ima: Allocated hash algorithm: sha1
Nov 6 00:22:38.779321 kernel: ima: No architecture policies found
Nov 6 00:22:38.779333 kernel: clk: Disabling unused clocks
Nov 6 00:22:38.779344 kernel: Freeing unused kernel image (initmem) memory: 15936K
Nov 6 00:22:38.779355 kernel: Write protecting the kernel read-only data: 40960k
Nov 6 00:22:38.779366 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 6 00:22:38.779377 kernel: Run /init as init process
Nov 6 00:22:38.779387 kernel: with arguments:
Nov 6 00:22:38.779398 kernel: /init
Nov 6 00:22:38.779411 kernel: with environment:
Nov 6 00:22:38.779421 kernel: HOME=/
Nov 6 00:22:38.779431 kernel: TERM=linux
Nov 6 00:22:38.779442 kernel: SCSI subsystem initialized
Nov 6 00:22:38.779452 kernel: libata version 3.00 loaded.
Nov 6 00:22:38.779784 kernel: ahci 0000:00:1f.2: version 3.0
Nov 6 00:22:38.779802 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 6 00:22:38.779996 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 6 00:22:38.780211 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 6 00:22:38.780413 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 6 00:22:38.780670 kernel: scsi host0: ahci
Nov 6 00:22:38.780896 kernel: scsi host1: ahci
Nov 6 00:22:38.781143 kernel: scsi host2: ahci
Nov 6 00:22:38.781373 kernel: scsi host3: ahci
Nov 6 00:22:38.781613 kernel: scsi host4: ahci
Nov 6 00:22:38.781835 kernel: scsi host5: ahci
Nov 6 00:22:38.781853 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 6 00:22:38.781865 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 6 00:22:38.781882 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 6 00:22:38.781894 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 6 00:22:38.781907 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 6 00:22:38.781920 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 6 00:22:38.781932 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 6 00:22:38.781944 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 6 00:22:38.781956 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 6 00:22:38.781971 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 6 00:22:38.781983 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 6 00:22:38.781995 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 6 00:22:38.782007 kernel: ata3.00: LPM support broken, forcing max_power
Nov 6 00:22:38.782019 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 6 00:22:38.782030 kernel: ata3.00: applying bridge limits
Nov 6 00:22:38.782042 kernel: ata3.00: LPM support broken, forcing max_power
Nov 6 00:22:38.782053 kernel: ata3.00: configured for UDMA/100
Nov 6 00:22:38.782316 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 6 00:22:38.782552 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 6 00:22:38.782757 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 6 00:22:38.782774 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 00:22:38.782786 kernel: GPT:16515071 != 27000831
Nov 6 00:22:38.782802 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 00:22:38.782813 kernel: GPT:16515071 != 27000831
Nov 6 00:22:38.782825 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 00:22:38.782836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 00:22:38.783078 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 6 00:22:38.783096 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 6 00:22:38.783321 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 6 00:22:38.783344 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 00:22:38.783357 kernel: device-mapper: uevent: version 1.0.3
Nov 6 00:22:38.783370 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 6 00:22:38.783382 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 6 00:22:38.783393 kernel: raid6: avx2x4 gen() 26346 MB/s
Nov 6 00:22:38.783405 kernel: raid6: avx2x2 gen() 29339 MB/s
Nov 6 00:22:38.783417 kernel: raid6: avx2x1 gen() 23409 MB/s
Nov 6 00:22:38.783432 kernel: raid6: using algorithm avx2x2 gen() 29339 MB/s
Nov 6 00:22:38.783444 kernel: raid6: .... xor() 19647 MB/s, rmw enabled
Nov 6 00:22:38.783456 kernel: raid6: using avx2x2 recovery algorithm
Nov 6 00:22:38.783468 kernel: xor: automatically using best checksumming function avx
Nov 6 00:22:38.783492 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 00:22:38.783505 kernel: BTRFS: device fsid 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (181)
Nov 6 00:22:38.783517 kernel: BTRFS info (device dm-0): first mount of filesystem 4dd99ff0-78f7-441c-acc1-7ff3d924a9b4
Nov 6 00:22:38.783532 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:22:38.783544 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 00:22:38.783556 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 6 00:22:38.783568 kernel: loop: module loaded
Nov 6 00:22:38.783580 kernel: loop0: detected capacity change from 0 to 100120
Nov 6 00:22:38.783592 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 6 00:22:38.783606 systemd[1]: Successfully made /usr/ read-only.
Nov 6 00:22:38.783625 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 00:22:38.783639 systemd[1]: Detected virtualization kvm.
Nov 6 00:22:38.783651 systemd[1]: Detected architecture x86-64.
Nov 6 00:22:38.783664 systemd[1]: Running in initrd.
Nov 6 00:22:38.783677 systemd[1]: No hostname configured, using default hostname.
Nov 6 00:22:38.783690 systemd[1]: Hostname set to .
Nov 6 00:22:38.783705 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 6 00:22:38.783718 systemd[1]: Queued start job for default target initrd.target.
Nov 6 00:22:38.783731 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 00:22:38.783744 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:22:38.783757 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 00:22:38.783770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 00:22:38.783787 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 00:22:38.783800 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 00:22:38.783813 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 00:22:38.783826 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 00:22:38.783840 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 00:22:38.783853 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 6 00:22:38.783869 systemd[1]: Reached target paths.target - Path Units.
Nov 6 00:22:38.783881 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 00:22:38.783894 systemd[1]: Reached target swap.target - Swaps.
Nov 6 00:22:38.783907 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 00:22:38.783920 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 00:22:38.783932 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 00:22:38.783945 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 00:22:38.783961 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 00:22:38.783974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 00:22:38.783986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 00:22:38.783999 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 00:22:38.784014 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 00:22:38.784027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 00:22:38.784042 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 00:22:38.784055 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 00:22:38.784082 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 00:22:38.784095 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 6 00:22:38.784108 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 00:22:38.784121 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 00:22:38.784134 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 00:22:38.784149 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:22:38.784162 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 00:22:38.784175 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 00:22:38.784188 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 00:22:38.784204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 00:22:38.784253 systemd-journald[315]: Collecting audit messages is disabled.
Nov 6 00:22:38.784283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 00:22:38.784298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 00:22:38.784312 systemd-journald[315]: Journal started
Nov 6 00:22:38.784337 systemd-journald[315]: Runtime Journal (/run/log/journal/d8cc8b82fa9e48c19fd57054abc50086) is 6M, max 48.1M, 42.1M free.
Nov 6 00:22:38.787202 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 00:22:38.861101 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 00:22:38.863590 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 00:22:38.872552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:22:38.877309 systemd-modules-load[316]: Inserted module 'br_netfilter'
Nov 6 00:22:38.879037 kernel: Bridge firewalling registered
Nov 6 00:22:38.879225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 00:22:38.889323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 00:22:38.891848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 00:22:38.896031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 00:22:38.906056 systemd-tmpfiles[335]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 6 00:22:38.912816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 00:22:38.917331 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 00:22:38.921987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 00:22:38.922911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 00:22:38.931196 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 00:22:38.963614 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5a467f58ff1d38830572ea713da04924778847a98299b0cfa25690713b346f38
Nov 6 00:22:39.005887 systemd-resolved[357]: Positive Trust Anchors:
Nov 6 00:22:39.005910 systemd-resolved[357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 00:22:39.005916 systemd-resolved[357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 6 00:22:39.005976 systemd-resolved[357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 00:22:39.046384 systemd-resolved[357]: Defaulting to hostname 'linux'.
Nov 6 00:22:39.048057 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 00:22:39.049136 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 00:22:39.158248 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 00:22:39.176796 kernel: iscsi: registered transport (tcp)
Nov 6 00:22:39.202232 kernel: iscsi: registered transport (qla4xxx)
Nov 6 00:22:39.202323 kernel: QLogic iSCSI HBA Driver
Nov 6 00:22:39.237224 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 6 00:22:39.277644 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 6 00:22:39.279860 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 6 00:22:39.346869 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 00:22:39.350410 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 00:22:39.355184 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 00:22:39.399808 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 00:22:39.402370 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 00:22:39.440595 systemd-udevd[592]: Using default interface naming scheme 'v257'.
Nov 6 00:22:39.460850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 00:22:39.465996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 00:22:39.831695 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 00:22:39.836900 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 00:22:39.841174 dracut-pre-trigger[653]: rd.md=0: removing MD RAID activation
Nov 6 00:22:40.162028 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 00:22:40.165854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 00:22:40.188989 systemd-networkd[710]: lo: Link UP
Nov 6 00:22:40.188999 systemd-networkd[710]: lo: Gained carrier
Nov 6 00:22:40.192412 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 00:22:40.194672 systemd[1]: Reached target network.target - Network.
Nov 6 00:22:40.279722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 00:22:40.284253 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 00:22:40.358407 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 00:22:40.379770 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 00:22:40.412384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 00:22:40.439112 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 00:22:40.453319 kernel: cryptd: max_cpu_qlen set to 1000
Nov 6 00:22:40.454670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 00:22:40.463607 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:22:40.464838 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 00:22:40.466116 systemd-networkd[710]: eth0: Link UP
Nov 6 00:22:40.466340 systemd-networkd[710]: eth0: Gained carrier
Nov 6 00:22:40.466349 systemd-networkd[710]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 6 00:22:40.471378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 00:22:40.471590 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:22:40.474625 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:22:40.480926 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 00:22:40.751107 kernel: AES CTR mode by8 optimization enabled
Nov 6 00:22:40.751302 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 6 00:22:40.754170 systemd-networkd[710]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 00:22:40.789875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 00:22:41.013834 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 00:22:41.017205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 00:22:41.019794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 00:22:41.021847 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 00:22:41.027681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 00:22:41.077553 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 00:22:41.115719 disk-uuid[779]: Primary Header is updated.
Nov 6 00:22:41.115719 disk-uuid[779]: Secondary Entries is updated.
Nov 6 00:22:41.115719 disk-uuid[779]: Secondary Header is updated.
Nov 6 00:22:41.657294 systemd-networkd[710]: eth0: Gained IPv6LL
Nov 6 00:22:42.178725 disk-uuid[859]: Warning: The kernel is still using the old partition table.
Nov 6 00:22:42.178725 disk-uuid[859]: The new table will be used at the next reboot or after you
Nov 6 00:22:42.178725 disk-uuid[859]: run partprobe(8) or kpartx(8)
Nov 6 00:22:42.178725 disk-uuid[859]: The operation has completed successfully.
Nov 6 00:22:42.189477 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 00:22:42.189602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 00:22:42.223879 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 00:22:42.317325 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870)
Nov 6 00:22:42.317408 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:22:42.317427 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 6 00:22:42.322779 kernel: BTRFS info (device vda6): turning on async discard
Nov 6 00:22:42.322858 kernel: BTRFS info (device vda6): enabling free space tree
Nov 6 00:22:42.332518 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552
Nov 6 00:22:42.334130 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 00:22:42.339156 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 00:22:42.478553 ignition[889]: Ignition 2.22.0 Nov 6 00:22:42.478565 ignition[889]: Stage: fetch-offline Nov 6 00:22:42.478613 ignition[889]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:42.478625 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:22:42.478710 ignition[889]: parsed url from cmdline: "" Nov 6 00:22:42.478714 ignition[889]: no config URL provided Nov 6 00:22:42.478719 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:42.478729 ignition[889]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:42.478771 ignition[889]: op(1): [started] loading QEMU firmware config module Nov 6 00:22:42.478776 ignition[889]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 6 00:22:42.498913 ignition[889]: op(1): [finished] loading QEMU firmware config module Nov 6 00:22:42.498947 ignition[889]: QEMU firmware config was not found. Ignoring... Nov 6 00:22:42.587592 ignition[889]: parsing config with SHA512: a77365e0a811fd32b658c016fead0884ff6930e401a84cd55ecd8fce1a55ac6288e5b7a08027162da65505a3374eb31b2aa5f8812e8c4eb4e5ce6123dce50e39 Nov 6 00:22:42.595586 unknown[889]: fetched base config from "system" Nov 6 00:22:42.596610 unknown[889]: fetched user config from "qemu" Nov 6 00:22:42.597161 ignition[889]: fetch-offline: fetch-offline passed Nov 6 00:22:42.597248 ignition[889]: Ignition finished successfully Nov 6 00:22:42.601028 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:42.622840 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 6 00:22:42.624250 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:22:42.671031 ignition[900]: Ignition 2.22.0 Nov 6 00:22:42.671050 ignition[900]: Stage: kargs Nov 6 00:22:42.671266 ignition[900]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:42.671278 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:22:42.672607 ignition[900]: kargs: kargs passed Nov 6 00:22:42.672661 ignition[900]: Ignition finished successfully Nov 6 00:22:42.678221 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:22:42.680848 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:22:42.725087 ignition[908]: Ignition 2.22.0 Nov 6 00:22:42.725102 ignition[908]: Stage: disks Nov 6 00:22:42.725262 ignition[908]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:42.725273 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:22:42.726181 ignition[908]: disks: disks passed Nov 6 00:22:42.730277 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:22:42.726230 ignition[908]: Ignition finished successfully Nov 6 00:22:42.732792 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:42.735818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:22:42.738727 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:42.742661 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:42.745889 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:42.750528 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
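Ignition logs the SHA512 of whatever config it ended up parsing (the long digest above). A small sketch that reproduces such a digest for a config file; the path is the `user.ign` location named in the log, though on this boot the effective config came from the base.d defaults plus QEMU fw_cfg:

```python
import hashlib

def config_digest(path: str) -> str:
    """SHA512 hex digest of an Ignition config, as logged during fetch-offline."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

print(config_digest("/usr/lib/ignition/user.ign"))
```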
Nov 6 00:22:42.800288 systemd-fsck[918]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 6 00:22:43.287972 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:22:43.289440 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:22:43.527123 kernel: EXT4-fs (vda9): mounted filesystem d1cfc077-cc9a-4d2c-97de-8a87792eb8cf r/w with ordered data mode. Quota mode: none. Nov 6 00:22:43.527930 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:22:43.529225 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:43.535552 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:43.537486 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:22:43.539842 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 00:22:43.539884 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:22:43.539913 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:43.568166 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:22:43.573439 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (927) Nov 6 00:22:43.574504 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 00:22:43.581940 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:22:43.581961 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:43.581973 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:22:43.581984 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:22:43.585586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:22:43.634458 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:22:43.640035 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:22:43.645954 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:22:43.651533 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:22:43.771467 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:43.775144 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:22:43.778297 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:22:43.796506 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:22:43.799387 kernel: BTRFS info (device vda6): last unmount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:22:43.812041 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:22:43.830221 ignition[1041]: INFO : Ignition 2.22.0 Nov 6 00:22:43.830221 ignition[1041]: INFO : Stage: mount Nov 6 00:22:43.833265 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:43.833265 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:22:43.833265 ignition[1041]: INFO : mount: mount passed Nov 6 00:22:43.833265 ignition[1041]: INFO : Ignition finished successfully Nov 6 00:22:43.843525 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:22:43.847671 systemd[1]: Starting ignition-files.service - Ignition (files)... 
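The fsck summary above is compact; a sketch that unpacks it, assuming the 4 KiB ext4 block size that resize2fs reports for this filesystem later in the log:

```python
import re

FSCK_RE = re.compile(
    r"(?P<label>\S+): clean, (?P<inodes_used>\d+)/(?P<inodes>\d+) files, "
    r"(?P<blocks_used>\d+)/(?P<blocks>\d+) blocks"
)

line = "ROOT: clean, 15/456736 files, 38230/456704 blocks"
m = FSCK_RE.search(line)
if m:
    used, total = int(m["blocks_used"]), int(m["blocks"])
    print(f"{m['label']}: {100 * used / total:.1f}% of blocks in use "
          f"({used * 4096 / 2**20:.0f} MiB of {total * 4096 / 2**30:.2f} GiB)")
    # -> ROOT: 8.4% of blocks in use (149 MiB of 1.74 GiB)
```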
Nov 6 00:22:44.532291 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:44.561106 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1053) Nov 6 00:22:44.710141 kernel: BTRFS info (device vda6): first mount of filesystem 1bec9db2-3d02-49a5-a8a3-33baf5dbb552 Nov 6 00:22:44.710250 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:44.714732 kernel: BTRFS info (device vda6): turning on async discard Nov 6 00:22:44.714775 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 00:22:44.717034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:22:44.776088 ignition[1070]: INFO : Ignition 2.22.0 Nov 6 00:22:44.776088 ignition[1070]: INFO : Stage: files Nov 6 00:22:44.778694 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:44.778694 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:22:44.782849 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:22:44.785317 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:22:44.785317 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:22:44.793111 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:22:44.795640 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:22:44.798328 unknown[1070]: wrote ssh authorized keys file for user: core Nov 6 00:22:44.800431 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:22:44.800431 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:44.800431 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:22:44.846041 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:22:45.182248 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:45.182248 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:45.188467 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:45.188467 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:45.188467 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:45.188467 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:45.200462 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:45.200462 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:45.200462 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" 
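Ignition numbers its download attempts ("attempt #1") and logs the result, as with the helm tarball above. A rough sketch of that fetch-with-retry shape using urllib; the attempt count and backoff here are assumptions, not Ignition's actual retry schedule:

```python
import time
import urllib.request

def fetch_with_retries(url: str, dest: str, attempts: int = 5) -> None:
    """Download url to dest, logging numbered attempts in the style seen above."""
    for attempt in range(1, attempts + 1):
        print(f"GET {url}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(url, timeout=30) as resp, \
                 open(dest, "wb") as out:
                out.write(resp.read())
            print("GET result: OK")
            return
        except OSError as err:  # URLError/HTTPError are OSError subclasses
            print(f"GET error: {err}")
            time.sleep(2 ** attempt)  # illustrative exponential backoff
    raise RuntimeError(f"giving up on {url}")

fetch_with_retries("https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
                   "/opt/helm-v3.17.3-linux-amd64.tar.gz")
```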
Nov 6 00:22:45.200462 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:45.200462 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:45.200462 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:45.220399 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:45.220399 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:45.220399 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 00:22:45.517729 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:22:46.240258 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:46.240258 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 00:22:46.247385 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 6 00:22:46.250587 ignition[1070]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 6 00:22:46.312586 ignition[1070]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:22:46.317962 ignition[1070]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 6 00:22:46.320794 ignition[1070]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 6 00:22:46.320794 ignition[1070]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:46.320794 ignition[1070]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:46.320794 ignition[1070]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:46.320794 ignition[1070]: INFO : files: createResultFile: createFiles: 
op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:46.320794 ignition[1070]: INFO : files: files passed Nov 6 00:22:46.320794 ignition[1070]: INFO : Ignition finished successfully Nov 6 00:22:46.327168 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:22:46.334324 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:22:46.352461 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:22:46.360536 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:22:46.360692 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 00:22:46.377315 initrd-setup-root-after-ignition[1104]: grep: /sysroot/oem/oem-release: No such file or directory Nov 6 00:22:46.382374 initrd-setup-root-after-ignition[1106]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:46.385356 initrd-setup-root-after-ignition[1106]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:46.389541 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:46.391433 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:22:46.393011 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:22:46.399998 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:22:46.475748 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:22:46.475904 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:22:46.480466 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:22:46.480886 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:22:46.484573 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:22:46.485934 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:22:46.531128 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:22:46.537366 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:22:46.575803 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:22:46.576024 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:46.577029 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:46.583679 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:22:46.586961 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:22:46.587162 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:22:46.588689 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:22:46.596894 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:22:46.597871 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:22:46.600844 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:46.604689 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:46.608535 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
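The files stage above wrote an ssh key for "core", a set of files, and systemd unit presets. A minimal Ignition v3-style config that would drive roughly those operations; the field names follow the public Ignition spec, but the key and unit contents are placeholders, not the config this machine actually received:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{"name": "core",
                          "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]},
    "storage": {"files": [{
        "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
    }]},
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
        {"name": "coreos-metadata.service", "enabled": False},
    ]},
}
print(json.dumps(config, indent=2))
```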
Nov 6 00:22:46.612124 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:22:46.615862 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:22:46.622669 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:22:46.623780 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:22:46.626790 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:22:46.627610 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:22:46.627787 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:22:46.635126 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:46.636024 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:46.641966 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:22:46.643943 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:22:46.644895 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:22:46.645042 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:22:46.652781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:22:46.652927 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:46.654136 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:22:46.658633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:22:46.661208 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:46.662493 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 00:22:46.666895 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:22:46.670878 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:22:46.670992 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:22:46.673834 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:22:46.673929 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:22:46.674737 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:22:46.674870 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:22:46.680599 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:22:46.680729 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:22:46.688980 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:22:46.692386 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:22:46.692587 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:22:46.709877 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:22:46.712890 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:22:46.713140 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:46.717134 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:22:46.717326 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:46.721739 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:22:46.721909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 6 00:22:46.732098 ignition[1130]: INFO : Ignition 2.22.0 Nov 6 00:22:46.732098 ignition[1130]: INFO : Stage: umount Nov 6 00:22:46.732098 ignition[1130]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:46.732098 ignition[1130]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 6 00:22:46.739985 ignition[1130]: INFO : umount: umount passed Nov 6 00:22:46.739985 ignition[1130]: INFO : Ignition finished successfully Nov 6 00:22:46.740781 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:22:46.740959 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:22:46.747845 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:22:46.748011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:22:46.754931 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:22:46.758792 systemd[1]: Stopped target network.target - Network. Nov 6 00:22:46.759570 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:22:46.759654 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:22:46.762539 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:22:46.762611 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:22:46.763191 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:22:46.763245 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:22:46.770240 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:22:46.770343 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:22:46.771373 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:22:46.776712 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:22:46.788134 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:22:46.788350 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:22:46.796194 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:22:46.796977 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:22:46.797027 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:46.798772 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:22:46.805122 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:22:46.805240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:22:46.806186 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:46.817774 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:22:46.823571 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:22:46.830949 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 00:22:46.831862 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:46.836406 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:22:46.836594 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:22:46.840823 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:22:46.840937 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:46.844562 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Nov 6 00:22:46.844625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:46.848251 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:22:46.848350 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:22:46.849461 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:22:46.849526 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:22:46.856425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:22:46.856501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:22:46.861463 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:22:46.861533 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:46.868479 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:22:46.869662 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:22:46.869724 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:46.875016 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:22:46.875098 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:46.878723 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:22:46.878785 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:46.879554 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:22:46.879615 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:46.885768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:46.885854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:46.913862 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:22:46.927514 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:22:46.933623 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:22:46.933806 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:22:46.935011 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:22:46.942393 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:22:46.967889 systemd[1]: Switching root. Nov 6 00:22:47.020406 systemd-journald[315]: Journal stopped Nov 6 00:22:48.667537 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). 
Nov 6 00:22:48.667626 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:22:48.667641 kernel: SELinux: policy capability open_perms=1 Nov 6 00:22:48.667657 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:22:48.667669 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:22:48.667685 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:22:48.667698 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:22:48.667712 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:22:48.667728 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:22:48.667740 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:22:48.667753 kernel: audit: type=1403 audit(1762388567.686:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:22:48.667767 systemd[1]: Successfully loaded SELinux policy in 191.275ms. Nov 6 00:22:48.667791 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.205ms. Nov 6 00:22:48.667806 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:22:48.667822 systemd[1]: Detected virtualization kvm. Nov 6 00:22:48.667836 systemd[1]: Detected architecture x86-64. Nov 6 00:22:48.667849 systemd[1]: Detected first boot. Nov 6 00:22:48.667862 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 6 00:22:48.667875 zram_generator::config[1176]: No configuration found. Nov 6 00:22:48.667890 kernel: Guest personality initialized and is inactive Nov 6 00:22:48.667903 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:22:48.667917 kernel: Initialized host personality Nov 6 00:22:48.667929 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:22:48.667941 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:22:48.667954 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:22:48.667966 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:22:48.667979 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:22:48.667993 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:22:48.668008 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:22:48.668021 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:22:48.668035 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:22:48.668048 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:22:48.668060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:22:48.668098 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:22:48.668111 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:22:48.668128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:22:48.668141 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
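The long "+PAM +AUDIT ... -GCRYPT" string above is systemd's compile-time feature list, with + for built-in and - for omitted. A two-liner that splits it into the two sets (excerpted from the line above):

```python
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP "
            "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")  # excerpt
enabled = {f[1:] for f in features.split() if f.startswith("+")}
disabled = {f[1:] for f in features.split() if f.startswith("-")}
print(sorted(enabled), sorted(disabled), sep="\n")
```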
Nov 6 00:22:48.668154 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:22:48.668167 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:22:48.668180 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:22:48.668193 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:22:48.668208 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:22:48.668221 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:48.668233 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:48.668259 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:22:48.668272 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:22:48.668285 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:48.668298 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:22:48.668313 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:48.668326 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:22:48.668339 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:22:48.668352 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:22:48.668365 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:22:48.668379 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:22:48.668391 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:22:48.668406 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:48.668419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:48.668432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:48.668444 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:22:48.668457 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:22:48.668471 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:22:48.668484 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:22:48.668499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:48.668511 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:22:48.668525 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:22:48.668537 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:22:48.668551 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:22:48.668565 systemd[1]: Reached target machines.target - Containers. Nov 6 00:22:48.668578 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 00:22:48.668594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:48.668606 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Nov 6 00:22:48.668619 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:22:48.668632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:48.668645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:48.668658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:48.668671 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:22:48.668686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:48.668700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:22:48.668712 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:22:48.668725 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:22:48.668737 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:22:48.668750 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:22:48.668764 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:48.668780 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:22:48.668793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:22:48.668806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:22:48.668820 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:22:48.668832 kernel: fuse: init (API version 7.41) Nov 6 00:22:48.668847 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:22:48.668881 systemd-journald[1240]: Collecting audit messages is disabled. Nov 6 00:22:48.668905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:22:48.668918 systemd-journald[1240]: Journal started Nov 6 00:22:48.668943 systemd-journald[1240]: Runtime Journal (/run/log/journal/d8cc8b82fa9e48c19fd57054abc50086) is 6M, max 48.1M, 42.1M free. Nov 6 00:22:48.349391 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:22:48.370687 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 00:22:48.371326 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:22:48.675768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:48.682752 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:22:48.684031 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:22:48.685876 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:22:48.687964 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:22:48.689796 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:22:48.691733 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:22:48.693696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:22:48.714735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 6 00:22:48.717096 kernel: ACPI: bus type drm_connector registered Nov 6 00:22:48.717980 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:22:48.718222 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:22:48.720540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:48.720804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:48.723282 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:48.723547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:48.725542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:48.725766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:48.728023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:22:48.728266 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:22:48.730395 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:48.730773 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:48.732965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:48.735220 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:48.738444 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:22:48.751246 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:22:48.754801 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 6 00:22:48.772502 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:22:48.775621 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:22:48.777672 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:22:48.777793 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:48.781020 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:22:48.783560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:48.789524 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:22:48.793934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:22:48.794685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:48.797153 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:22:48.799274 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:48.800728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:22:48.805315 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:22:48.838991 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:22:48.842178 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Nov 6 00:22:48.844261 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:22:48.853191 systemd-journald[1240]: Time spent on flushing to /var/log/journal/d8cc8b82fa9e48c19fd57054abc50086 is 26.818ms for 1050 entries. Nov 6 00:22:48.853191 systemd-journald[1240]: System Journal (/var/log/journal/d8cc8b82fa9e48c19fd57054abc50086) is 8M, max 163.5M, 155.5M free. Nov 6 00:22:49.138510 systemd-journald[1240]: Received client request to flush runtime journal. Nov 6 00:22:49.138684 kernel: loop1: detected capacity change from 0 to 110976 Nov 6 00:22:49.138745 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 00:22:49.138780 kernel: loop3: detected capacity change from 0 to 128048 Nov 6 00:22:49.138813 kernel: loop4: detected capacity change from 0 to 110976 Nov 6 00:22:49.138845 kernel: loop5: detected capacity change from 0 to 229808 Nov 6 00:22:48.859275 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:48.931619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:49.011602 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:22:49.015697 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:22:49.019732 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:22:49.024051 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:22:49.029213 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:22:49.140692 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:22:49.148110 kernel: loop6: detected capacity change from 0 to 128048 Nov 6 00:22:49.158355 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 6 00:22:49.162041 (sd-merge)[1310]: Merged extensions into '/usr'. Nov 6 00:22:49.186026 systemd[1]: Reload requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:22:49.186046 systemd[1]: Reloading... Nov 6 00:22:49.272011 zram_generator::config[1340]: No configuration found. Nov 6 00:22:49.652691 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:22:49.652881 systemd[1]: Reloading finished in 466 ms. Nov 6 00:22:49.763483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:22:49.766649 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:22:49.771387 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:22:49.794023 systemd[1]: Starting ensure-sysext.service... Nov 6 00:22:49.797608 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:22:49.809531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:22:49.817159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:22:49.842936 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:22:49.848921 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:22:49.848942 systemd[1]: Reloading... Nov 6 00:22:49.880190 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
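The sd-merge lines above ("Using extensions ... Merged extensions into '/usr'") come from systemd-sysext discovering *.raw extension images and overlaying them onto /usr and /opt. A discovery sketch under the assumption that the search set is /etc/extensions, /run/extensions, and /var/lib/extensions, per systemd-sysext(8); the exact set can vary by version:

```python
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions():
    """Yield candidate sysext images the way the 'Using extensions' line implies."""
    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.glob("*.raw")):
            # Symlinks count: this boot resolves /etc/extensions/kubernetes.raw
            # to /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw.
            yield entry.name, entry.resolve()

for name, target in discover_extensions():
    print(f"{name} -> {target}")
```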
Nov 6 00:22:49.880241 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:22:49.880703 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:22:49.882102 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:22:49.885026 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:22:49.885661 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 6 00:22:49.885738 systemd-tmpfiles[1382]: ACLs are not supported, ignoring. Nov 6 00:22:49.886030 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Nov 6 00:22:49.886047 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Nov 6 00:22:49.898184 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:49.898193 systemd-tmpfiles[1382]: Skipping /boot Nov 6 00:22:49.909771 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:49.909956 systemd-tmpfiles[1382]: Skipping /boot Nov 6 00:22:49.951103 zram_generator::config[1412]: No configuration found. Nov 6 00:22:50.117915 systemd-resolved[1380]: Positive Trust Anchors: Nov 6 00:22:50.117939 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:22:50.117945 systemd-resolved[1380]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 6 00:22:50.117992 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:22:50.123663 systemd-resolved[1380]: Defaulting to hostname 'linux'. Nov 6 00:22:50.240563 systemd[1]: Reloading finished in 391 ms. Nov 6 00:22:50.269279 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:22:50.271593 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:22:50.273820 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:22:50.302321 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:50.305144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:50.315343 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:50.319012 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:50.321831 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:22:50.325564 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:22:50.332453 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 00:22:50.340607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
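The "Duplicate line for path" warnings above mean two tmpfiles.d entries claim the same path, and the later one is ignored. A simplified duplicate detector over the same directories (real systemd-tmpfiles also weighs entry types, specifiers, and file precedence, which this sketch skips):

```python
from pathlib import Path

def find_duplicate_tmpfiles_paths(dirs=("/usr/lib/tmpfiles.d", "/etc/tmpfiles.d")):
    """Flag tmpfiles.d entries naming the same path twice, as warned above."""
    seen = {}
    for d in map(Path, dirs):
        if not d.is_dir():
            continue
        for conf in sorted(d.glob("*.conf")):
            for lineno, raw in enumerate(conf.read_text().splitlines(), 1):
                fields = raw.strip().split()
                if len(fields) < 2 or fields[0].startswith("#"):
                    continue  # skip blanks, comments, malformed lines
                path = fields[1]
                if path in seen:
                    print(f"{conf}:{lineno}: duplicate line for path "
                          f"{path!r}, first seen at {seen[path]}")
                else:
                    seen[path] = f"{conf}:{lineno}"

find_duplicate_tmpfiles_paths()
```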
Nov 6 00:22:50.345343 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:22:50.350825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:50.353409 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:50.356778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:50.369008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:50.370918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:50.371040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:50.376405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:50.376660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:50.380535 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:50.385572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:50.388477 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:50.388756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:50.391499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:22:50.404731 systemd-udevd[1464]: Using default interface naming scheme 'v257'. Nov 6 00:22:50.415216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:50.417783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:50.421771 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:50.429887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:50.434462 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:50.436320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:50.436451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:50.437899 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:22:50.440908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:50.441214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:50.444563 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:50.444847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:50.448797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:50.449338 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:50.450578 augenrules[1495]: No rules Nov 6 00:22:50.452289 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:50.452509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 6 00:22:50.455548 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:50.455889 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:50.466676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:50.470562 systemd[1]: Finished ensure-sysext.service. Nov 6 00:22:50.483329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:22:50.485554 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:50.485640 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:50.488134 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 00:22:50.493737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:22:50.497835 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:22:50.533477 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:22:50.642261 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 00:22:50.645298 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:22:50.646537 systemd-networkd[1515]: lo: Link UP Nov 6 00:22:50.646549 systemd-networkd[1515]: lo: Gained carrier Nov 6 00:22:50.649927 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:50.649975 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 00:22:50.649981 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:50.650693 systemd-networkd[1515]: eth0: Link UP Nov 6 00:22:50.650983 systemd-networkd[1515]: eth0: Gained carrier Nov 6 00:22:50.651000 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 6 00:22:50.652518 systemd[1]: Reached target network.target - Network. Nov 6 00:22:50.655084 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:22:50.657444 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:22:50.663359 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:22:50.674256 systemd-networkd[1515]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 00:22:50.679122 systemd-timesyncd[1520]: Network configuration changed, trying to establish connection. Nov 6 00:22:51.203844 systemd-resolved[1380]: Clock change detected. Flushing caches. Nov 6 00:22:51.204088 systemd-timesyncd[1520]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 00:22:51.204320 systemd-timesyncd[1520]: Initial clock synchronization to Thu 2025-11-06 00:22:51.203663 UTC. Nov 6 00:22:51.209833 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 6 00:22:51.234965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
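The jump from the 00:22:50.679122 journal stamp to timesyncd's "Initial clock synchronization to ... 00:22:51.203663 UTC" is the step applied to the realtime clock; resolved's "Clock change detected. Flushing caches" is the downstream reaction. A rough computation of the step from the two quoted timestamps (approximate, since some real time elapsed between the entries):

```python
from datetime import datetime

before = datetime.fromisoformat("2025-11-06 00:22:50.679122")  # last pre-sync stamp
after = datetime.fromisoformat("2025-11-06 00:22:51.203663")   # time timesyncd set
print(f"clock stepped forward by roughly "
      f"{(after - before).total_seconds():.6f} s")  # ~0.524541 s
```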
Nov 6 00:22:51.239651 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:22:51.244859 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 6 00:22:51.249462 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 00:22:51.249737 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 00:22:51.251969 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 00:22:51.261838 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:22:51.291895 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:51.291935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:51.321943 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:22:51.381061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:51.586060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:51.586615 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:51.592455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:51.623224 kernel: kvm_amd: TSC scaling supported Nov 6 00:22:51.623328 kernel: kvm_amd: Nested Virtualization enabled Nov 6 00:22:51.623353 kernel: kvm_amd: Nested Paging enabled Nov 6 00:22:51.623371 kernel: kvm_amd: LBR virtualization supported Nov 6 00:22:51.625343 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 6 00:22:51.625399 kernel: kvm_amd: Virtual GIF supported Nov 6 00:22:51.666005 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:22:51.679839 ldconfig[1461]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:22:51.691643 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:22:51.694961 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:22:51.906262 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:22:51.915253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:51.921367 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:51.924950 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:22:51.928406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:22:51.930857 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:22:51.940542 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:22:51.944553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:22:51.950325 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:22:51.953020 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 00:22:51.953078 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:22:51.954821 systemd[1]: Reached target timers.target - Timer Units. 
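ldconfig's complaint above means it opened /usr/lib/ld.so.conf expecting a shared library and found no ELF header; ld.so.conf is plain text. The check it is making boils down to four magic bytes:

```python
def looks_like_elf(path: str) -> bool:
    """True if the file starts with the ELF magic ldconfig complained about."""
    with open(path, "rb") as f:
        return f.read(4) == b"\x7fELF"

# Plain-text config, so this prints False -- matching the warning above.
print(looks_like_elf("/usr/lib/ld.so.conf"))
```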
Nov 6 00:22:51.958308 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:22:51.967099 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:22:51.974367 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:22:51.977845 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:22:51.983083 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:22:51.997599 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:22:52.003519 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:22:52.007137 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:22:52.010937 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:22:52.013786 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:52.017895 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:22:52.017951 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:22:52.019730 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:22:52.046266 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:22:52.053307 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:22:52.065071 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:22:52.076348 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:22:52.080592 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:22:52.086992 jq[1582]: false Nov 6 00:22:52.087991 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:22:52.099615 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:22:52.106944 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:22:52.124060 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:22:52.127162 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing passwd entry cache Nov 6 00:22:52.123641 oslogin_cache_refresh[1584]: Refreshing passwd entry cache Nov 6 00:22:52.129420 extend-filesystems[1583]: Found /dev/vda6 Nov 6 00:22:52.132474 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:22:52.135830 extend-filesystems[1583]: Found /dev/vda9 Nov 6 00:22:52.146765 extend-filesystems[1583]: Checking size of /dev/vda9 Nov 6 00:22:52.144255 oslogin_cache_refresh[1584]: Failure getting users, quitting Nov 6 00:22:52.154207 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting users, quitting Nov 6 00:22:52.154207 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:22:52.154207 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing group entry cache Nov 6 00:22:52.150053 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:22:52.144282 oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 6 00:22:52.145795 oslogin_cache_refresh[1584]: Refreshing group entry cache Nov 6 00:22:52.154874 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 00:22:52.160385 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:22:52.162976 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:22:52.168147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:22:52.168468 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting groups, quitting Nov 6 00:22:52.168930 oslogin_cache_refresh[1584]: Failure getting groups, quitting Nov 6 00:22:52.169028 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:22:52.169080 oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:22:52.176832 extend-filesystems[1583]: Resized partition /dev/vda9 Nov 6 00:22:52.182027 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:22:52.185543 jq[1603]: true Nov 6 00:22:52.187134 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:22:52.187588 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:22:52.191855 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:22:52.192344 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:22:52.196866 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:22:52.197365 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:22:52.202490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:22:52.202972 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 00:22:52.208452 update_engine[1602]: I20251106 00:22:52.208310 1602 main.cc:92] Flatcar Update Engine starting Nov 6 00:22:52.220354 (ntainerd)[1612]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:22:52.237372 jq[1611]: true Nov 6 00:22:52.239511 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:22:52.338108 extend-filesystems[1607]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:22:52.344867 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 6 00:22:52.406227 tar[1610]: linux-amd64/LICENSE Nov 6 00:22:52.408497 tar[1610]: linux-amd64/helm Nov 6 00:22:52.408838 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 6 00:22:52.453293 extend-filesystems[1607]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 00:22:52.453293 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 00:22:52.453293 extend-filesystems[1607]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 6 00:22:52.457548 extend-filesystems[1583]: Resized filesystem in /dev/vda9 Nov 6 00:22:52.455124 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:22:52.455529 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 6 00:22:52.472247 dbus-daemon[1580]: [system] SELinux support is enabled Nov 6 00:22:52.472592 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:22:52.478288 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:22:52.478341 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:22:52.480077 bash[1648]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:22:52.480965 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:22:52.480997 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:22:52.484109 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:22:52.486215 update_engine[1602]: I20251106 00:22:52.486028 1602 update_check_scheduler.cc:74] Next update check in 5m14s Nov 6 00:22:52.488588 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 00:22:52.489845 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:22:52.521833 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:22:52.660231 sshd_keygen[1614]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:22:52.726003 systemd-logind[1599]: Watching system buttons on /dev/input/event2 (Power Button) Nov 6 00:22:52.726043 systemd-logind[1599]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:22:52.727331 systemd-logind[1599]: New seat seat0. Nov 6 00:22:52.808086 systemd-networkd[1515]: eth0: Gained IPv6LL Nov 6 00:22:53.175329 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:22:53.178132 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:22:53.182105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:22:53.188204 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:22:53.192900 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 00:22:53.196270 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:22:53.201291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:53.213476 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:22:53.231174 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:38008.service - OpenSSH per-connection server daemon (10.0.0.1:38008). Nov 6 00:22:53.356191 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:22:53.362063 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:22:53.371674 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:22:53.384153 locksmithd[1652]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:22:53.391199 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 00:22:53.391506 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 00:22:53.396183 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 6 00:22:53.407667 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:22:53.453249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:22:53.458853 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:22:53.464937 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:22:53.467320 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:22:53.589549 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 38008 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:22:53.592918 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:53.603553 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:22:53.611103 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:22:53.621598 systemd-logind[1599]: New session 1 of user core. Nov 6 00:22:53.632085 containerd[1612]: time="2025-11-06T00:22:53Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:22:53.633344 containerd[1612]: time="2025-11-06T00:22:53.633318733Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:22:53.642481 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:22:53.649280 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:22:53.654496 containerd[1612]: time="2025-11-06T00:22:53.654442017Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.348µs" Nov 6 00:22:53.655032 containerd[1612]: time="2025-11-06T00:22:53.654621323Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:22:53.655032 containerd[1612]: time="2025-11-06T00:22:53.654650157Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:22:53.655032 containerd[1612]: time="2025-11-06T00:22:53.654865431Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:22:53.655032 containerd[1612]: time="2025-11-06T00:22:53.654889807Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:22:53.655032 containerd[1612]: time="2025-11-06T00:22:53.654918631Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:22:53.655032 containerd[1612]: time="2025-11-06T00:22:53.654991427Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.655001646Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656342611Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: 
time="2025-11-06T00:22:53.656356167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656367037Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656374992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656470701Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656738634Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656772087Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:22:53.656830 containerd[1612]: time="2025-11-06T00:22:53.656781013Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:22:53.657143 containerd[1612]: time="2025-11-06T00:22:53.657122764Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:22:53.657590 containerd[1612]: time="2025-11-06T00:22:53.657572898Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:22:53.657705 containerd[1612]: time="2025-11-06T00:22:53.657690950Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:22:53.665890 containerd[1612]: time="2025-11-06T00:22:53.665771894Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:22:53.666332 containerd[1612]: time="2025-11-06T00:22:53.666269107Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:22:53.666507 containerd[1612]: time="2025-11-06T00:22:53.666490973Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:22:53.666576 containerd[1612]: time="2025-11-06T00:22:53.666563419Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:22:53.666635 containerd[1612]: time="2025-11-06T00:22:53.666622730Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:22:53.666682 containerd[1612]: time="2025-11-06T00:22:53.666672133Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:22:53.666736 containerd[1612]: time="2025-11-06T00:22:53.666724361Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:22:53.666798 containerd[1612]: time="2025-11-06T00:22:53.666785505Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:22:53.666892 containerd[1612]: time="2025-11-06T00:22:53.666878920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:22:53.666953 
containerd[1612]: time="2025-11-06T00:22:53.666939755Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:22:53.667000 containerd[1612]: time="2025-11-06T00:22:53.666989167Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:22:53.667047 containerd[1612]: time="2025-11-06T00:22:53.667036296Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:22:53.667243 containerd[1612]: time="2025-11-06T00:22:53.667226071Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:22:53.667338 containerd[1612]: time="2025-11-06T00:22:53.667305951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:22:53.667453 containerd[1612]: time="2025-11-06T00:22:53.667431547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:22:53.667526 containerd[1612]: time="2025-11-06T00:22:53.667511717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:22:53.667587 containerd[1612]: time="2025-11-06T00:22:53.667573002Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:22:53.667639 containerd[1612]: time="2025-11-06T00:22:53.667628055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 00:22:53.667707 containerd[1612]: time="2025-11-06T00:22:53.667692536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:22:53.667769 containerd[1612]: time="2025-11-06T00:22:53.667756025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:22:53.667836 containerd[1612]: time="2025-11-06T00:22:53.667823612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:22:53.667896 containerd[1612]: time="2025-11-06T00:22:53.667884807Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:22:53.667944 containerd[1612]: time="2025-11-06T00:22:53.667933248Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:22:53.668068 containerd[1612]: time="2025-11-06T00:22:53.668054104Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:22:53.668136 containerd[1612]: time="2025-11-06T00:22:53.668123745Z" level=info msg="Start snapshots syncer" Nov 6 00:22:53.668257 containerd[1612]: time="2025-11-06T00:22:53.668239973Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:22:53.668643 containerd[1612]: time="2025-11-06T00:22:53.668607162Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:22:53.668911 containerd[1612]: time="2025-11-06T00:22:53.668890302Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:22:53.669159 containerd[1612]: time="2025-11-06T00:22:53.669105236Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:22:53.669484 containerd[1612]: time="2025-11-06T00:22:53.669453168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:22:53.669577 containerd[1612]: time="2025-11-06T00:22:53.669561161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:22:53.669627 containerd[1612]: time="2025-11-06T00:22:53.669616264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:22:53.669682 containerd[1612]: time="2025-11-06T00:22:53.669670275Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:22:53.669734 containerd[1612]: time="2025-11-06T00:22:53.669723265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:22:53.669789 containerd[1612]: time="2025-11-06T00:22:53.669777857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:22:53.669971 containerd[1612]: time="2025-11-06T00:22:53.669840875Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:22:53.669971 containerd[1612]: time="2025-11-06T00:22:53.669888184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:22:53.669971 containerd[1612]: 
time="2025-11-06T00:22:53.669901970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:22:53.669971 containerd[1612]: time="2025-11-06T00:22:53.669912469Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:22:53.670077 containerd[1612]: time="2025-11-06T00:22:53.670063683Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:22:53.670086 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670878882Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670898859Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670908748Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670916372Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670936089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670949324Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670972828Z" level=info msg="runtime interface created" Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670978859Z" level=info msg="created NRI interface" Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670986594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.670997124Z" level=info msg="Connect containerd service" Nov 6 00:22:53.671055 containerd[1612]: time="2025-11-06T00:22:53.671019936Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:22:53.673791 containerd[1612]: time="2025-11-06T00:22:53.673592631Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:22:53.674136 systemd-logind[1599]: New session c1 of user core. Nov 6 00:22:53.750975 tar[1610]: linux-amd64/README.md Nov 6 00:22:53.772207 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 00:22:53.826539 systemd[1707]: Queued start job for default target default.target. 
Nov 6 00:22:53.827250 containerd[1612]: time="2025-11-06T00:22:53.826944901Z" level=info msg="Start subscribing containerd event" Nov 6 00:22:53.827250 containerd[1612]: time="2025-11-06T00:22:53.827054918Z" level=info msg="Start recovering state" Nov 6 00:22:53.827250 containerd[1612]: time="2025-11-06T00:22:53.827224265Z" level=info msg="Start event monitor" Nov 6 00:22:53.827250 containerd[1612]: time="2025-11-06T00:22:53.827244052Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:22:53.827250 containerd[1612]: time="2025-11-06T00:22:53.827255133Z" level=info msg="Start streaming server" Nov 6 00:22:53.827385 containerd[1612]: time="2025-11-06T00:22:53.827273427Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:22:53.827385 containerd[1612]: time="2025-11-06T00:22:53.827288375Z" level=info msg="runtime interface starting up..." Nov 6 00:22:53.827385 containerd[1612]: time="2025-11-06T00:22:53.827302131Z" level=info msg="starting plugins..." Nov 6 00:22:53.827385 containerd[1612]: time="2025-11-06T00:22:53.827305778Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:22:53.827456 containerd[1612]: time="2025-11-06T00:22:53.827385658Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:22:53.827456 containerd[1612]: time="2025-11-06T00:22:53.827321888Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:22:53.827659 containerd[1612]: time="2025-11-06T00:22:53.827599709Z" level=info msg="containerd successfully booted in 0.197041s" Nov 6 00:22:53.827784 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:22:53.837627 systemd[1707]: Created slice app.slice - User Application Slice. Nov 6 00:22:53.837666 systemd[1707]: Reached target paths.target - Paths. Nov 6 00:22:53.837725 systemd[1707]: Reached target timers.target - Timers. Nov 6 00:22:53.839587 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:22:53.852130 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:22:53.852266 systemd[1707]: Reached target sockets.target - Sockets. Nov 6 00:22:53.852311 systemd[1707]: Reached target basic.target - Basic System. Nov 6 00:22:53.852352 systemd[1707]: Reached target default.target - Main User Target. Nov 6 00:22:53.852398 systemd[1707]: Startup finished in 169ms. Nov 6 00:22:53.852632 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:22:53.866016 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:22:53.936355 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:38012.service - OpenSSH per-connection server daemon (10.0.0.1:38012). Nov 6 00:22:54.002994 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 38012 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:22:54.005010 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:54.009684 systemd-logind[1599]: New session 2 of user core. Nov 6 00:22:54.017978 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:22:54.074788 sshd[1738]: Connection closed by 10.0.0.1 port 38012 Nov 6 00:22:54.077521 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:54.086779 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:38012.service: Deactivated successfully. Nov 6 00:22:54.088729 systemd[1]: session-2.scope: Deactivated successfully. 
Nov 6 00:22:54.089673 systemd-logind[1599]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:22:54.092328 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:38022.service - OpenSSH per-connection server daemon (10.0.0.1:38022). Nov 6 00:22:54.095272 systemd-logind[1599]: Removed session 2. Nov 6 00:22:54.161136 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 38022 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:22:54.162661 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:54.167872 systemd-logind[1599]: New session 3 of user core. Nov 6 00:22:54.178035 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:22:54.236883 sshd[1748]: Connection closed by 10.0.0.1 port 38022 Nov 6 00:22:54.237300 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:54.242091 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:38022.service: Deactivated successfully. Nov 6 00:22:54.244167 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:22:54.245073 systemd-logind[1599]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:22:54.246992 systemd-logind[1599]: Removed session 3. Nov 6 00:22:54.408664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:54.411215 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:22:54.413259 systemd[1]: Startup finished in 3.893s (kernel) + 9.287s (initrd) + 6.394s (userspace) = 19.575s. Nov 6 00:22:54.414747 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:55.014492 kubelet[1758]: E1106 00:22:55.014424 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:55.018743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:55.018983 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:55.019382 systemd[1]: kubelet.service: Consumed 1.361s CPU time, 269M memory peak. Nov 6 00:23:04.254962 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:51828.service - OpenSSH per-connection server daemon (10.0.0.1:51828). Nov 6 00:23:04.322782 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 51828 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:23:04.324483 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:04.329781 systemd-logind[1599]: New session 4 of user core. Nov 6 00:23:04.346983 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:23:04.404968 sshd[1775]: Connection closed by 10.0.0.1 port 51828 Nov 6 00:23:04.405465 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:04.419078 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:51828.service: Deactivated successfully. Nov 6 00:23:04.421250 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:23:04.422153 systemd-logind[1599]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:23:04.425085 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:51834.service - OpenSSH per-connection server daemon (10.0.0.1:51834). 
Nov 6 00:23:04.425786 systemd-logind[1599]: Removed session 4. Nov 6 00:23:04.486779 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 51834 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:23:04.488880 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:04.495226 systemd-logind[1599]: New session 5 of user core. Nov 6 00:23:04.504953 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:23:04.558853 sshd[1784]: Connection closed by 10.0.0.1 port 51834 Nov 6 00:23:04.559352 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:04.568794 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:51834.service: Deactivated successfully. Nov 6 00:23:04.570796 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:23:04.571564 systemd-logind[1599]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:23:04.574514 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:51840.service - OpenSSH per-connection server daemon (10.0.0.1:51840). Nov 6 00:23:04.575288 systemd-logind[1599]: Removed session 5. Nov 6 00:23:04.648038 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 51840 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:23:04.650002 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:04.655927 systemd-logind[1599]: New session 6 of user core. Nov 6 00:23:04.670155 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:23:04.727348 sshd[1793]: Connection closed by 10.0.0.1 port 51840 Nov 6 00:23:04.727910 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:04.746380 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:51840.service: Deactivated successfully. Nov 6 00:23:04.748749 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:23:04.749728 systemd-logind[1599]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:23:04.752709 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). Nov 6 00:23:04.753585 systemd-logind[1599]: Removed session 6. Nov 6 00:23:04.825996 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:23:04.828024 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:04.833840 systemd-logind[1599]: New session 7 of user core. Nov 6 00:23:04.848100 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:23:04.911620 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:23:04.911956 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:04.931146 sudo[1803]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:04.933466 sshd[1802]: Connection closed by 10.0.0.1 port 51848 Nov 6 00:23:04.934245 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:04.949200 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:51848.service: Deactivated successfully. Nov 6 00:23:04.951705 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:23:04.952767 systemd-logind[1599]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:23:04.956407 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). 
Nov 6 00:23:04.957365 systemd-logind[1599]: Removed session 7. Nov 6 00:23:05.026040 sshd[1809]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:23:05.028392 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:05.029707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:23:05.031758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:05.035633 systemd-logind[1599]: New session 8 of user core. Nov 6 00:23:05.046114 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 00:23:05.107780 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:23:05.108122 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:05.247819 sudo[1817]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:05.258471 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:23:05.259045 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:05.273961 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:23:05.313674 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:05.319494 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:05.339488 augenrules[1845]: No rules Nov 6 00:23:05.341361 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:23:05.342721 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:23:05.344248 sudo[1816]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:05.346487 sshd[1815]: Connection closed by 10.0.0.1 port 51864 Nov 6 00:23:05.346869 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:05.352455 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:51864.service: Deactivated successfully. Nov 6 00:23:05.355105 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:23:05.357377 systemd-logind[1599]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:23:05.358852 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:51866.service - OpenSSH per-connection server daemon (10.0.0.1:51866). Nov 6 00:23:05.360307 systemd-logind[1599]: Removed session 8. Nov 6 00:23:05.413296 sshd[1854]: Accepted publickey for core from 10.0.0.1 port 51866 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:23:05.415658 sshd-session[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:23:05.421486 systemd-logind[1599]: New session 9 of user core. Nov 6 00:23:05.425065 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 6 00:23:05.482312 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:23:05.483177 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:23:05.618430 kubelet[1832]: E1106 00:23:05.618279 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:05.625875 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:05.626106 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:05.626548 systemd[1]: kubelet.service: Consumed 818ms CPU time, 110.7M memory peak. Nov 6 00:23:06.615071 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 00:23:06.635257 (dockerd)[1886]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:23:07.617156 dockerd[1886]: time="2025-11-06T00:23:07.617063266Z" level=info msg="Starting up" Nov 6 00:23:07.618357 dockerd[1886]: time="2025-11-06T00:23:07.618328088Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:23:07.647372 dockerd[1886]: time="2025-11-06T00:23:07.647303557Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:23:07.853835 dockerd[1886]: time="2025-11-06T00:23:07.853754256Z" level=info msg="Loading containers: start." Nov 6 00:23:07.866847 kernel: Initializing XFRM netlink socket Nov 6 00:23:08.165831 systemd-networkd[1515]: docker0: Link UP Nov 6 00:23:08.172554 dockerd[1886]: time="2025-11-06T00:23:08.172490460Z" level=info msg="Loading containers: done." Nov 6 00:23:08.219687 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck62967039-merged.mount: Deactivated successfully. Nov 6 00:23:08.222623 dockerd[1886]: time="2025-11-06T00:23:08.222553807Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:23:08.222700 dockerd[1886]: time="2025-11-06T00:23:08.222687096Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:23:08.222890 dockerd[1886]: time="2025-11-06T00:23:08.222831327Z" level=info msg="Initializing buildkit" Nov 6 00:23:08.260655 dockerd[1886]: time="2025-11-06T00:23:08.260598885Z" level=info msg="Completed buildkit initialization" Nov 6 00:23:08.267475 dockerd[1886]: time="2025-11-06T00:23:08.267394048Z" level=info msg="Daemon has completed initialization" Nov 6 00:23:08.267668 dockerd[1886]: time="2025-11-06T00:23:08.267501089Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:23:08.267839 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:23:09.120151 containerd[1612]: time="2025-11-06T00:23:09.120071733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 00:23:09.941176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107472573.mount: Deactivated successfully. 
Nov 6 00:23:11.261153 containerd[1612]: time="2025-11-06T00:23:11.261062916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:11.262245 containerd[1612]: time="2025-11-06T00:23:11.262164321Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 6 00:23:11.263721 containerd[1612]: time="2025-11-06T00:23:11.263660637Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:11.266760 containerd[1612]: time="2025-11-06T00:23:11.266702662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:11.269658 containerd[1612]: time="2025-11-06T00:23:11.269601328Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.149446659s" Nov 6 00:23:11.269658 containerd[1612]: time="2025-11-06T00:23:11.269656652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 00:23:11.270950 containerd[1612]: time="2025-11-06T00:23:11.270907808Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 00:23:13.018841 containerd[1612]: time="2025-11-06T00:23:13.018730961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:13.019627 containerd[1612]: time="2025-11-06T00:23:13.019571317Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 6 00:23:13.021129 containerd[1612]: time="2025-11-06T00:23:13.021067222Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:13.032914 containerd[1612]: time="2025-11-06T00:23:13.032856100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:13.033994 containerd[1612]: time="2025-11-06T00:23:13.033930165Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.762981549s" Nov 6 00:23:13.033994 containerd[1612]: time="2025-11-06T00:23:13.033964339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 00:23:13.034641 containerd[1612]: 
time="2025-11-06T00:23:13.034597436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 00:23:14.990641 containerd[1612]: time="2025-11-06T00:23:14.990561747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:14.992168 containerd[1612]: time="2025-11-06T00:23:14.992053886Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 6 00:23:14.993999 containerd[1612]: time="2025-11-06T00:23:14.993910497Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:14.997338 containerd[1612]: time="2025-11-06T00:23:14.997298361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:14.998452 containerd[1612]: time="2025-11-06T00:23:14.998410787Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.96376513s" Nov 6 00:23:14.998452 containerd[1612]: time="2025-11-06T00:23:14.998447305Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 00:23:14.999080 containerd[1612]: time="2025-11-06T00:23:14.999042812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 00:23:15.815843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:23:15.817769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:16.048187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:16.060154 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:16.094280 kubelet[2182]: E1106 00:23:16.094123 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:16.098515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:16.098724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:16.099131 systemd[1]: kubelet.service: Consumed 222ms CPU time, 110.1M memory peak. Nov 6 00:23:16.917749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953533255.mount: Deactivated successfully. 
Nov 6 00:23:17.661959 containerd[1612]: time="2025-11-06T00:23:17.661882377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:17.662890 containerd[1612]: time="2025-11-06T00:23:17.662861974Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 6 00:23:17.664095 containerd[1612]: time="2025-11-06T00:23:17.664054460Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:17.666375 containerd[1612]: time="2025-11-06T00:23:17.666334627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:17.667009 containerd[1612]: time="2025-11-06T00:23:17.666964317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.667881481s" Nov 6 00:23:17.667009 containerd[1612]: time="2025-11-06T00:23:17.667000064Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 00:23:17.667710 containerd[1612]: time="2025-11-06T00:23:17.667518457Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 00:23:18.368437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935243358.mount: Deactivated successfully. 
Nov 6 00:23:19.154994 containerd[1612]: time="2025-11-06T00:23:19.154935474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:19.155860 containerd[1612]: time="2025-11-06T00:23:19.155785258Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 6 00:23:19.157204 containerd[1612]: time="2025-11-06T00:23:19.157161429Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:19.160413 containerd[1612]: time="2025-11-06T00:23:19.160358595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:19.161303 containerd[1612]: time="2025-11-06T00:23:19.161245037Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.493690754s" Nov 6 00:23:19.161303 containerd[1612]: time="2025-11-06T00:23:19.161293388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 00:23:19.161850 containerd[1612]: time="2025-11-06T00:23:19.161824384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:23:19.648190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482399444.mount: Deactivated successfully. 
Nov 6 00:23:19.655456 containerd[1612]: time="2025-11-06T00:23:19.655363446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:19.656357 containerd[1612]: time="2025-11-06T00:23:19.656327373Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 6 00:23:19.658914 containerd[1612]: time="2025-11-06T00:23:19.658766798Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:19.661088 containerd[1612]: time="2025-11-06T00:23:19.661030664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:23:19.661885 containerd[1612]: time="2025-11-06T00:23:19.661838649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 499.985952ms" Nov 6 00:23:19.661954 containerd[1612]: time="2025-11-06T00:23:19.661879015Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:23:19.662538 containerd[1612]: time="2025-11-06T00:23:19.662513655Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 00:23:20.605332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159014314.mount: Deactivated successfully. Nov 6 00:23:26.320420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:23:26.325089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:26.740912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:26.746406 (kubelet)[2314]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:23:27.052001 kubelet[2314]: E1106 00:23:27.051663 2314 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:23:27.060356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:23:27.060605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:23:27.063321 systemd[1]: kubelet.service: Consumed 523ms CPU time, 110.5M memory peak. 
Nov 6 00:23:28.515200 containerd[1612]: time="2025-11-06T00:23:28.515096365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:28.518700 containerd[1612]: time="2025-11-06T00:23:28.518611102Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 6 00:23:28.520845 containerd[1612]: time="2025-11-06T00:23:28.520647119Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:28.529005 containerd[1612]: time="2025-11-06T00:23:28.528909302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:28.534164 containerd[1612]: time="2025-11-06T00:23:28.534060566Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.871510492s" Nov 6 00:23:28.534610 containerd[1612]: time="2025-11-06T00:23:28.534135549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 00:23:35.146401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:35.146609 systemd[1]: kubelet.service: Consumed 523ms CPU time, 110.5M memory peak. Nov 6 00:23:35.162694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:35.256467 systemd[1]: Reload requested from client PID 2357 ('systemctl') (unit session-9.scope)... Nov 6 00:23:35.256524 systemd[1]: Reloading... Nov 6 00:23:35.458842 zram_generator::config[2397]: No configuration found. Nov 6 00:23:36.013663 systemd[1]: Reloading finished in 756 ms. Nov 6 00:23:36.198241 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:23:36.200324 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 00:23:36.203476 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:36.203538 systemd[1]: kubelet.service: Consumed 250ms CPU time, 98.4M memory peak. Nov 6 00:23:36.219380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:36.693578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:36.715385 (kubelet)[2449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:23:36.826729 kubelet[2449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:23:36.826729 kubelet[2449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:23:36.833191 kubelet[2449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:23:36.833191 kubelet[2449]: I1106 00:23:36.828936 2449 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:23:37.398857 kubelet[2449]: I1106 00:23:37.398677 2449 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:23:37.398857 kubelet[2449]: I1106 00:23:37.398723 2449 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:23:37.399124 kubelet[2449]: I1106 00:23:37.399088 2449 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:23:37.456989 kubelet[2449]: I1106 00:23:37.451685 2449 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:23:37.614121 kubelet[2449]: E1106 00:23:37.609899 2449 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:23:37.634482 kubelet[2449]: I1106 00:23:37.634435 2449 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:23:37.657180 kubelet[2449]: I1106 00:23:37.655250 2449 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 00:23:37.657180 kubelet[2449]: I1106 00:23:37.656548 2449 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:23:37.661131 kubelet[2449]: I1106 00:23:37.657475 2449 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:23:37.661131 kubelet[2449]: I1106 00:23:37.660365 2449 topology_manager.go:138] "Creating topology manager 
with none policy" Nov 6 00:23:37.663027 kubelet[2449]: I1106 00:23:37.662915 2449 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:23:37.665566 kubelet[2449]: I1106 00:23:37.665445 2449 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:37.672335 kubelet[2449]: I1106 00:23:37.672239 2449 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:23:37.672335 kubelet[2449]: I1106 00:23:37.672291 2449 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:23:37.672335 kubelet[2449]: I1106 00:23:37.672359 2449 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:23:37.672594 kubelet[2449]: I1106 00:23:37.672397 2449 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:23:37.686322 kubelet[2449]: I1106 00:23:37.684801 2449 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:23:37.687550 kubelet[2449]: I1106 00:23:37.687494 2449 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:23:37.692183 kubelet[2449]: E1106 00:23:37.688534 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:23:37.692183 kubelet[2449]: W1106 00:23:37.689152 2449 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:23:37.695168 kubelet[2449]: E1106 00:23:37.694369 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:23:37.698634 kubelet[2449]: I1106 00:23:37.698573 2449 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:23:37.699888 kubelet[2449]: I1106 00:23:37.698683 2449 server.go:1289] "Started kubelet" Nov 6 00:23:37.705835 kubelet[2449]: I1106 00:23:37.705569 2449 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:23:37.716366 kubelet[2449]: I1106 00:23:37.707527 2449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:23:37.716366 kubelet[2449]: I1106 00:23:37.708691 2449 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:23:37.716366 kubelet[2449]: I1106 00:23:37.708769 2449 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:23:37.716366 kubelet[2449]: I1106 00:23:37.709831 2449 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:23:37.716366 kubelet[2449]: I1106 00:23:37.710945 2449 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:23:37.716366 kubelet[2449]: I1106 00:23:37.711060 2449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:23:37.723589 kubelet[2449]: I1106 00:23:37.721380 2449 desired_state_of_world_populator.go:150] "Desired 
state populator starts to run" Nov 6 00:23:37.723589 kubelet[2449]: I1106 00:23:37.721517 2449 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:23:37.724697 kubelet[2449]: E1106 00:23:37.724555 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:37.738845 kubelet[2449]: E1106 00:23:37.734391 2449 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:23:37.738845 kubelet[2449]: E1106 00:23:37.734972 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Nov 6 00:23:37.738845 kubelet[2449]: E1106 00:23:37.735874 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:23:37.743195 kubelet[2449]: E1106 00:23:37.720122 2449 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875431c3a1e113f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:23:37.698611519 +0000 UTC m=+0.975366217,LastTimestamp:2025-11-06 00:23:37.698611519 +0000 UTC m=+0.975366217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:23:37.748338 kubelet[2449]: I1106 00:23:37.744933 2449 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:23:37.756791 kubelet[2449]: I1106 00:23:37.756581 2449 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:23:37.756791 kubelet[2449]: I1106 00:23:37.756617 2449 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:23:37.803986 kubelet[2449]: I1106 00:23:37.803537 2449 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:23:37.803986 kubelet[2449]: I1106 00:23:37.803568 2449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:23:37.803986 kubelet[2449]: I1106 00:23:37.803600 2449 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:37.833954 kubelet[2449]: I1106 00:23:37.821495 2449 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:23:37.833954 kubelet[2449]: E1106 00:23:37.825712 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:37.834878 kubelet[2449]: I1106 00:23:37.834773 2449 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:23:37.835021 kubelet[2449]: I1106 00:23:37.834969 2449 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:23:37.835021 kubelet[2449]: I1106 00:23:37.835000 2449 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:23:37.838371 kubelet[2449]: E1106 00:23:37.836789 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:23:37.838371 kubelet[2449]: I1106 00:23:37.837477 2449 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:23:37.838371 kubelet[2449]: E1106 00:23:37.837656 2449 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:23:37.868022 update_engine[1602]: I20251106 00:23:37.866914 1602 update_attempter.cc:509] Updating boot flags... Nov 6 00:23:37.936278 kubelet[2449]: E1106 00:23:37.935976 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:37.938297 kubelet[2449]: E1106 00:23:37.938244 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Nov 6 00:23:37.940148 kubelet[2449]: E1106 00:23:37.938524 2449 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:23:38.037616 kubelet[2449]: E1106 00:23:38.037155 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.139210 kubelet[2449]: E1106 00:23:38.138271 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.139376 kubelet[2449]: E1106 00:23:38.139266 2449 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:23:38.243136 kubelet[2449]: E1106 00:23:38.239937 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.339227 kubelet[2449]: E1106 00:23:38.338952 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Nov 6 00:23:38.345891 kubelet[2449]: E1106 00:23:38.342834 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.446772 kubelet[2449]: E1106 00:23:38.446393 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.446772 kubelet[2449]: E1106 00:23:38.446600 2449 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875431c3a1e113f default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 00:23:37.698611519 +0000 UTC m=+0.975366217,LastTimestamp:2025-11-06 00:23:37.698611519 +0000 UTC m=+0.975366217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 00:23:38.539719 kubelet[2449]: E1106 00:23:38.539398 2449 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:23:38.547235 kubelet[2449]: E1106 00:23:38.547164 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.557071 kubelet[2449]: E1106 00:23:38.556995 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:23:38.641838 kubelet[2449]: E1106 00:23:38.641637 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:23:38.649263 kubelet[2449]: E1106 00:23:38.648250 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.735295 kubelet[2449]: E1106 00:23:38.732687 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:23:38.750322 kubelet[2449]: E1106 00:23:38.750205 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.850762 kubelet[2449]: E1106 00:23:38.850668 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.951487 kubelet[2449]: E1106 00:23:38.951326 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:38.977392 kubelet[2449]: I1106 00:23:38.976954 2449 policy_none.go:49] "None policy: Start" Nov 6 00:23:38.977392 kubelet[2449]: I1106 00:23:38.977012 2449 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:23:38.977392 kubelet[2449]: I1106 00:23:38.977033 2449 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:23:39.052059 kubelet[2449]: E1106 00:23:39.051787 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:39.255535 kubelet[2449]: E1106 00:23:39.252034 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:39.256375 kubelet[2449]: E1106 00:23:39.256111 2449 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s" Nov 6 00:23:39.310271 kubelet[2449]: E1106 00:23:39.310184 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:23:39.340412 kubelet[2449]: E1106 00:23:39.340332 2449 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 00:23:39.356981 kubelet[2449]: E1106 00:23:39.356928 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:39.397229 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:23:39.459245 kubelet[2449]: E1106 00:23:39.459179 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:39.565426 kubelet[2449]: E1106 00:23:39.561955 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:39.660658 kubelet[2449]: E1106 00:23:39.659016 2449 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:23:39.670423 kubelet[2449]: E1106 00:23:39.665412 2449 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 00:23:39.707253 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:23:39.730847 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:23:39.751606 kubelet[2449]: E1106 00:23:39.751548 2449 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:23:39.751929 kubelet[2449]: I1106 00:23:39.751887 2449 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:23:39.753145 kubelet[2449]: I1106 00:23:39.751916 2449 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:23:39.753453 kubelet[2449]: I1106 00:23:39.753365 2449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:23:39.755823 kubelet[2449]: E1106 00:23:39.755769 2449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:23:39.755910 kubelet[2449]: E1106 00:23:39.755844 2449 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 00:23:39.859246 kubelet[2449]: I1106 00:23:39.856770 2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:23:39.860198 kubelet[2449]: E1106 00:23:39.860136 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Nov 6 00:23:40.066178 kubelet[2449]: I1106 00:23:40.065666 2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:23:40.066178 kubelet[2449]: E1106 00:23:40.066164 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Nov 6 00:23:40.479236 kubelet[2449]: I1106 00:23:40.477139 2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:23:40.479236 kubelet[2449]: E1106 00:23:40.477591 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Nov 6 00:23:40.690277 kubelet[2449]: E1106 00:23:40.690202 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:23:40.856849 kubelet[2449]: E1106 00:23:40.856636 2449 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="3.2s" Nov 6 00:23:41.073368 kubelet[2449]: I1106 00:23:41.073097 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/178acae2a068e6ad4135ccb32ea3ff14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"178acae2a068e6ad4135ccb32ea3ff14\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:41.073368 kubelet[2449]: I1106 00:23:41.073166 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/178acae2a068e6ad4135ccb32ea3ff14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"178acae2a068e6ad4135ccb32ea3ff14\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:41.073368 kubelet[2449]: I1106 00:23:41.073200 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/178acae2a068e6ad4135ccb32ea3ff14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"178acae2a068e6ad4135ccb32ea3ff14\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:41.111138 systemd[1]: Created slice kubepods-burstable-pod178acae2a068e6ad4135ccb32ea3ff14.slice - libcontainer container kubepods-burstable-pod178acae2a068e6ad4135ccb32ea3ff14.slice. 
Nov 6 00:23:41.135196 kubelet[2449]: E1106 00:23:41.133700 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:41.177175 kubelet[2449]: I1106 00:23:41.173388 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:41.177175 kubelet[2449]: I1106 00:23:41.174994 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:41.177175 kubelet[2449]: I1106 00:23:41.175028 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:41.177175 kubelet[2449]: I1106 00:23:41.175059 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:41.177175 kubelet[2449]: I1106 00:23:41.175083 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:41.206038 systemd[1]: Created slice kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice - libcontainer container kubepods-burstable-pod20c890a246d840d308022312da9174cb.slice. Nov 6 00:23:41.211038 kubelet[2449]: E1106 00:23:41.210909 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.58:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:23:41.217129 kubelet[2449]: E1106 00:23:41.217063 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:41.273259 systemd[1]: Created slice kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice - libcontainer container kubepods-burstable-podd13d96f639b65e57f439b4396b605564.slice. 
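Each "VerifyControllerAttachedVolume ... kubernetes.io/host-path/..." line above maps to a hostPath volume (ca-certs, k8s-certs, kubeconfig, flexvolume-dir) declared in a static-pod manifest; the kubelet earlier logged "Adding static pod path" for /etc/kubernetes/manifests. A small sketch listing that directory shows which pods produce these volume entries:

```go
// Sketch: enumerate the static-pod manifests the kubelet is materializing.
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/kubernetes/manifests")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		// Typically kube-apiserver.yaml, kube-controller-manager.yaml and
		// kube-scheduler.yaml, matching the kubepods-burstable-pod* slices
		// systemd creates in the surrounding lines.
		fmt.Println(e.Name())
	}
}
```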
Nov 6 00:23:41.279390 kubelet[2449]: I1106 00:23:41.277437 2449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:41.283780 kubelet[2449]: E1106 00:23:41.283106 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:41.283780 kubelet[2449]: E1106 00:23:41.283629 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:23:41.289743 kubelet[2449]: I1106 00:23:41.289256 2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:23:41.290279 kubelet[2449]: E1106 00:23:41.290108 2449 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Nov 6 00:23:41.438465 kubelet[2449]: E1106 00:23:41.436606 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:41.438602 containerd[1612]: time="2025-11-06T00:23:41.437644876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:178acae2a068e6ad4135ccb32ea3ff14,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:41.533411 kubelet[2449]: E1106 00:23:41.530285 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:41.535606 containerd[1612]: time="2025-11-06T00:23:41.532626819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:41.587183 kubelet[2449]: E1106 00:23:41.586337 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:41.592377 containerd[1612]: time="2025-11-06T00:23:41.591756423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:41.605789 containerd[1612]: time="2025-11-06T00:23:41.598277640Z" level=info msg="connecting to shim 4e7be6a1ed9a96947fadded9009a0d5b659c4522732677fdcb78e1bd2492539d" address="unix:///run/containerd/s/5e5ec2a84522fc98ebe47fd6c23e6705331087d476c2073424aa2669bca407ac" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:41.638304 containerd[1612]: time="2025-11-06T00:23:41.638238767Z" level=info msg="connecting to shim 7b3cfec85eea507aaaa1774d328abe86980055346288fed15252699128a0ae4a" address="unix:///run/containerd/s/95ecf8b90d8d752526d0f729b13db0db07efd1bf8e6960b9c0d17e3298fd1a4a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:41.738302 systemd[1]: Started 
cri-containerd-7b3cfec85eea507aaaa1774d328abe86980055346288fed15252699128a0ae4a.scope - libcontainer container 7b3cfec85eea507aaaa1774d328abe86980055346288fed15252699128a0ae4a. Nov 6 00:23:41.738943 containerd[1612]: time="2025-11-06T00:23:41.738307392Z" level=info msg="connecting to shim 0658d853ae46d48b6b1306b4eb698ce54a68c9f344479939f4589e2a463af33f" address="unix:///run/containerd/s/963ce717ca6ff5663828b0e25a35fb49999d8352a44c03529ad1aa2eb2267e93" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:41.794187 systemd[1]: Started cri-containerd-4e7be6a1ed9a96947fadded9009a0d5b659c4522732677fdcb78e1bd2492539d.scope - libcontainer container 4e7be6a1ed9a96947fadded9009a0d5b659c4522732677fdcb78e1bd2492539d. Nov 6 00:23:41.974221 systemd[1]: Started cri-containerd-0658d853ae46d48b6b1306b4eb698ce54a68c9f344479939f4589e2a463af33f.scope - libcontainer container 0658d853ae46d48b6b1306b4eb698ce54a68c9f344479939f4589e2a463af33f. Nov 6 00:23:42.074299 containerd[1612]: time="2025-11-06T00:23:42.073427399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:178acae2a068e6ad4135ccb32ea3ff14,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e7be6a1ed9a96947fadded9009a0d5b659c4522732677fdcb78e1bd2492539d\"" Nov 6 00:23:42.077573 kubelet[2449]: E1106 00:23:42.077011 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:42.085206 containerd[1612]: time="2025-11-06T00:23:42.085030282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:20c890a246d840d308022312da9174cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b3cfec85eea507aaaa1774d328abe86980055346288fed15252699128a0ae4a\"" Nov 6 00:23:42.086047 kubelet[2449]: E1106 00:23:42.085995 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:42.087760 containerd[1612]: time="2025-11-06T00:23:42.087679045Z" level=info msg="CreateContainer within sandbox \"4e7be6a1ed9a96947fadded9009a0d5b659c4522732677fdcb78e1bd2492539d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:23:42.095303 containerd[1612]: time="2025-11-06T00:23:42.095224860Z" level=info msg="CreateContainer within sandbox \"7b3cfec85eea507aaaa1774d328abe86980055346288fed15252699128a0ae4a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:23:42.159864 containerd[1612]: time="2025-11-06T00:23:42.159332950Z" level=info msg="Container 958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:42.191982 containerd[1612]: time="2025-11-06T00:23:42.191888371Z" level=info msg="Container d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:42.198867 containerd[1612]: time="2025-11-06T00:23:42.198738714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d13d96f639b65e57f439b4396b605564,Namespace:kube-system,Attempt:0,} returns sandbox id \"0658d853ae46d48b6b1306b4eb698ce54a68c9f344479939f4589e2a463af33f\"" Nov 6 00:23:42.199828 kubelet[2449]: E1106 00:23:42.199761 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:42.200644 containerd[1612]: time="2025-11-06T00:23:42.200598190Z" level=info msg="CreateContainer within sandbox \"4e7be6a1ed9a96947fadded9009a0d5b659c4522732677fdcb78e1bd2492539d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a\"" Nov 6 00:23:42.202185 containerd[1612]: time="2025-11-06T00:23:42.202133975Z" level=info msg="StartContainer for \"958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a\"" Nov 6 00:23:42.207257 containerd[1612]: time="2025-11-06T00:23:42.206489505Z" level=info msg="connecting to shim 958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a" address="unix:///run/containerd/s/5e5ec2a84522fc98ebe47fd6c23e6705331087d476c2073424aa2669bca407ac" protocol=ttrpc version=3 Nov 6 00:23:42.222836 containerd[1612]: time="2025-11-06T00:23:42.222753504Z" level=info msg="CreateContainer within sandbox \"0658d853ae46d48b6b1306b4eb698ce54a68c9f344479939f4589e2a463af33f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:23:42.244281 containerd[1612]: time="2025-11-06T00:23:42.236300381Z" level=info msg="CreateContainer within sandbox \"7b3cfec85eea507aaaa1774d328abe86980055346288fed15252699128a0ae4a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e\"" Nov 6 00:23:42.244281 containerd[1612]: time="2025-11-06T00:23:42.236897487Z" level=info msg="StartContainer for \"d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e\"" Nov 6 00:23:42.244281 containerd[1612]: time="2025-11-06T00:23:42.243400096Z" level=info msg="connecting to shim d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e" address="unix:///run/containerd/s/95ecf8b90d8d752526d0f729b13db0db07efd1bf8e6960b9c0d17e3298fd1a4a" protocol=ttrpc version=3 Nov 6 00:23:42.277111 containerd[1612]: time="2025-11-06T00:23:42.272821628Z" level=info msg="Container 7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:42.286443 systemd[1]: Started cri-containerd-958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a.scope - libcontainer container 958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a. Nov 6 00:23:42.318671 systemd[1]: Started cri-containerd-d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e.scope - libcontainer container d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e. 
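The RunPodSandbox → CreateContainer → StartContainer sequence running through these lines is the CRI RuntimeService protocol between kubelet and containerd, spoken over the same unix socket via gRPC. A minimal CRI client sketch (k8s.io/cri-api), here just asking for the runtime version the way `crictl version` does:

```go
// Sketch: talk CRI v1 to containerd over its socket.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// For the containerd v2.0.5 in this log, this reports runtimeName
	// "containerd", matching the "Container runtime initialized" line.
	fmt.Printf("%s %s (CRI %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```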
Nov 6 00:23:42.367532 kubelet[2449]: E1106 00:23:42.363417 2449 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.58:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:23:42.549612 containerd[1612]: time="2025-11-06T00:23:42.549551363Z" level=info msg="StartContainer for \"958ef0c7431633397a37375bcdba91880e618bafcdcb86886961958c77c23c6a\" returns successfully" Nov 6 00:23:42.551455 containerd[1612]: time="2025-11-06T00:23:42.551416188Z" level=info msg="StartContainer for \"d73372379a0aa0f084812101d657325aa95d6f166e65058b85bc764ebaadd10e\" returns successfully" Nov 6 00:23:42.556662 containerd[1612]: time="2025-11-06T00:23:42.556553082Z" level=info msg="CreateContainer within sandbox \"0658d853ae46d48b6b1306b4eb698ce54a68c9f344479939f4589e2a463af33f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303\"" Nov 6 00:23:42.558799 containerd[1612]: time="2025-11-06T00:23:42.557502651Z" level=info msg="StartContainer for \"7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303\"" Nov 6 00:23:42.558993 containerd[1612]: time="2025-11-06T00:23:42.558942797Z" level=info msg="connecting to shim 7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303" address="unix:///run/containerd/s/963ce717ca6ff5663828b0e25a35fb49999d8352a44c03529ad1aa2eb2267e93" protocol=ttrpc version=3 Nov 6 00:23:42.601020 systemd[1]: Started cri-containerd-7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303.scope - libcontainer container 7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303. 
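The recurring dns.go:153 "Nameserver limits exceeded" warnings fire because the kubelet clamps pod DNS configuration to the classic libc limit of three nameservers; the host resolv.conf evidently listed more, and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 survived. A sketch reproducing that check:

```go
// Sketch: flag a resolv.conf with more nameservers than the kubelet applies.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // libc MAXNS, the limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	if len(servers) > maxNameservers {
		// Mirrors the log: the first three are applied, the rest omitted.
		fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```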
Nov 6 00:23:42.717790 containerd[1612]: time="2025-11-06T00:23:42.716766730Z" level=info msg="StartContainer for \"7e98baad34a0dd427565b801a1dac25634371da539489351a8360c5c6d313303\" returns successfully" Nov 6 00:23:42.893847 kubelet[2449]: I1106 00:23:42.893776 2449 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:23:42.906537 kubelet[2449]: E1106 00:23:42.906205 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:42.906537 kubelet[2449]: E1106 00:23:42.906367 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:42.906966 kubelet[2449]: E1106 00:23:42.906942 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:42.907202 kubelet[2449]: E1106 00:23:42.907183 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:42.915886 kubelet[2449]: E1106 00:23:42.915834 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:42.916495 kubelet[2449]: E1106 00:23:42.916453 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:44.097095 kubelet[2449]: E1106 00:23:44.097051 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:44.097095 kubelet[2449]: E1106 00:23:44.097075 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:44.099200 kubelet[2449]: E1106 00:23:44.097224 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:44.099200 kubelet[2449]: E1106 00:23:44.097311 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:46.089426 kubelet[2449]: E1106 00:23:46.089362 2449 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 00:23:46.105020 kubelet[2449]: E1106 00:23:46.104030 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:47.590065 kubelet[2449]: E1106 00:23:47.589989 2449 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 00:23:47.658699 kubelet[2449]: I1106 00:23:47.657933 2449 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:23:47.708292 kubelet[2449]: I1106 00:23:47.708234 2449 apiserver.go:52] "Watching apiserver" Nov 6 00:23:47.724010 kubelet[2449]: I1106 00:23:47.722512 2449 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:23:47.730364 kubelet[2449]: I1106 00:23:47.730062 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:47.779385 kubelet[2449]: E1106 00:23:47.777053 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:47.779385 kubelet[2449]: I1106 00:23:47.777107 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:47.785235 kubelet[2449]: E1106 00:23:47.785115 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:47.785235 kubelet[2449]: I1106 00:23:47.785163 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:47.793321 kubelet[2449]: E1106 00:23:47.793266 2449 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:49.677102 kubelet[2449]: I1106 00:23:49.675565 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:49.723922 kubelet[2449]: E1106 00:23:49.722068 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:50.184613 kubelet[2449]: E1106 00:23:50.184095 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:51.785179 kubelet[2449]: I1106 00:23:51.782071 2449 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:51.813049 kubelet[2449]: E1106 00:23:51.810635 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:52.192515 kubelet[2449]: E1106 00:23:52.192460 2449 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:52.825209 systemd[1]: Reload requested from client PID 2752 ('systemctl') (unit session-9.scope)... Nov 6 00:23:52.826316 systemd[1]: Reloading... Nov 6 00:23:53.113305 zram_generator::config[2796]: No configuration found. Nov 6 00:23:53.658179 systemd[1]: Reloading finished in 829 ms. Nov 6 00:23:53.741660 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:53.771617 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:23:53.773150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:23:53.773229 systemd[1]: kubelet.service: Consumed 2.179s CPU time, 132.1M memory peak. Nov 6 00:23:53.784800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:23:54.299153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
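The "no PriorityClass with name system-node-critical" failures a few lines up are a bootstrap race: the apiserver's bootstrap controller creates the system priority classes shortly after it comes up, and the mirror-pod creation then succeeds on retry, as the later "Creating a mirror pod" lines suggest. A hedged client-go sketch checking for the class (the admin.conf path is an assumption for illustration):

```go
// Sketch: query the priority class the kubelet was missing.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		// A NotFound here reproduces the kubelet.go:3311 error above.
		log.Fatal(err)
	}
	fmt.Printf("%s value=%d\n", pc.Name, pc.Value)
}
```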
Nov 6 00:23:54.325454 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:23:54.476557 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:23:54.476557 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:23:54.476557 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:23:54.477425 kubelet[2840]: I1106 00:23:54.476572 2840 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:23:54.495446 kubelet[2840]: I1106 00:23:54.495377 2840 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:23:54.495446 kubelet[2840]: I1106 00:23:54.495423 2840 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:23:54.495932 kubelet[2840]: I1106 00:23:54.495758 2840 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:23:54.511516 kubelet[2840]: I1106 00:23:54.510114 2840 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:23:54.518119 kubelet[2840]: I1106 00:23:54.517096 2840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:23:54.534082 kubelet[2840]: I1106 00:23:54.531341 2840 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:23:54.540216 kubelet[2840]: I1106 00:23:54.540141 2840 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:23:54.540625 kubelet[2840]: I1106 00:23:54.540577 2840 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:23:54.552433 kubelet[2840]: I1106 00:23:54.540626 2840 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:23:54.552433 kubelet[2840]: I1106 00:23:54.551759 2840 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:23:54.552433 kubelet[2840]: I1106 00:23:54.551778 2840 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:23:54.552433 kubelet[2840]: I1106 00:23:54.551913 2840 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:54.569354 kubelet[2840]: I1106 00:23:54.566250 2840 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:23:54.569354 kubelet[2840]: I1106 00:23:54.566286 2840 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:23:54.569354 kubelet[2840]: I1106 00:23:54.566315 2840 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:23:54.569354 kubelet[2840]: I1106 00:23:54.566332 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:23:54.575900 kubelet[2840]: I1106 00:23:54.570400 2840 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:23:54.575900 kubelet[2840]: I1106 00:23:54.571369 2840 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:23:54.581075 kernel: hrtimer: interrupt took 9643759 ns Nov 6 00:23:54.643847 kubelet[2840]: I1106 00:23:54.640801 2840 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:23:54.643847 kubelet[2840]: I1106 00:23:54.642157 2840 server.go:1289] "Started kubelet" Nov 6 00:23:54.644615 kubelet[2840]: I1106 00:23:54.644034 2840 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Nov 6 00:23:54.644615 kubelet[2840]: I1106 00:23:54.644592 2840 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:23:54.660917 kubelet[2840]: I1106 00:23:54.651566 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:23:54.663373 kubelet[2840]: I1106 00:23:54.663315 2840 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:23:54.666873 kubelet[2840]: I1106 00:23:54.664016 2840 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:23:54.675647 kubelet[2840]: I1106 00:23:54.670030 2840 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:23:54.679562 kubelet[2840]: I1106 00:23:54.677478 2840 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:23:54.688733 kubelet[2840]: I1106 00:23:54.680596 2840 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:23:54.689354 kubelet[2840]: I1106 00:23:54.689310 2840 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:23:54.698902 kubelet[2840]: I1106 00:23:54.696928 2840 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:23:54.712434 kubelet[2840]: I1106 00:23:54.708038 2840 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:23:54.746008 kubelet[2840]: I1106 00:23:54.728626 2840 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:23:54.757239 kubelet[2840]: E1106 00:23:54.757184 2840 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:23:54.840921 kubelet[2840]: I1106 00:23:54.830184 2840 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:23:54.850636 kubelet[2840]: I1106 00:23:54.850592 2840 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:23:54.852985 kubelet[2840]: I1106 00:23:54.852730 2840 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:23:54.852985 kubelet[2840]: I1106 00:23:54.852769 2840 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:23:54.852985 kubelet[2840]: I1106 00:23:54.852779 2840 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:23:54.866210 kubelet[2840]: E1106 00:23:54.863867 2840 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:23:54.906336 kubelet[2840]: I1106 00:23:54.906285 2840 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:23:54.906336 kubelet[2840]: I1106 00:23:54.906306 2840 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:23:54.906336 kubelet[2840]: I1106 00:23:54.906341 2840 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:23:54.910959 kubelet[2840]: I1106 00:23:54.906547 2840 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:23:54.910959 kubelet[2840]: I1106 00:23:54.906570 2840 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:23:54.910959 kubelet[2840]: I1106 00:23:54.906590 2840 policy_none.go:49] "None policy: Start" Nov 6 00:23:54.910959 kubelet[2840]: I1106 00:23:54.906600 2840 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:23:54.910959 kubelet[2840]: I1106 00:23:54.906611 2840 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:23:54.910959 kubelet[2840]: I1106 00:23:54.906710 2840 state_mem.go:75] "Updated machine memory state" Nov 6 00:23:54.940886 kubelet[2840]: E1106 00:23:54.939553 2840 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:23:54.940886 kubelet[2840]: I1106 00:23:54.939872 2840 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:23:54.940886 kubelet[2840]: I1106 00:23:54.939890 2840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:23:54.940886 kubelet[2840]: I1106 00:23:54.940795 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:23:54.948905 kubelet[2840]: E1106 00:23:54.946780 2840 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:23:54.968080 kubelet[2840]: I1106 00:23:54.965149 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:54.968080 kubelet[2840]: I1106 00:23:54.965740 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:54.968080 kubelet[2840]: I1106 00:23:54.966115 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:54.988080 kubelet[2840]: I1106 00:23:54.988023 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/178acae2a068e6ad4135ccb32ea3ff14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"178acae2a068e6ad4135ccb32ea3ff14\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:54.988766 kubelet[2840]: I1106 00:23:54.988439 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/178acae2a068e6ad4135ccb32ea3ff14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"178acae2a068e6ad4135ccb32ea3ff14\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:54.988766 kubelet[2840]: I1106 00:23:54.988487 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/178acae2a068e6ad4135ccb32ea3ff14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"178acae2a068e6ad4135ccb32ea3ff14\") " pod="kube-system/kube-apiserver-localhost" Nov 6 00:23:54.988766 kubelet[2840]: I1106 00:23:54.988531 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:54.988766 kubelet[2840]: I1106 00:23:54.988558 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:54.988766 kubelet[2840]: I1106 00:23:54.988593 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:54.989063 kubelet[2840]: I1106 00:23:54.988619 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:54.989063 kubelet[2840]: I1106 00:23:54.988643 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/20c890a246d840d308022312da9174cb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"20c890a246d840d308022312da9174cb\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:54.989063 kubelet[2840]: I1106 00:23:54.988669 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13d96f639b65e57f439b4396b605564-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d13d96f639b65e57f439b4396b605564\") " pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:55.015283 kubelet[2840]: E1106 00:23:55.014797 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:55.023618 kubelet[2840]: E1106 00:23:55.023426 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 6 00:23:55.070539 kubelet[2840]: I1106 00:23:55.069502 2840 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 00:23:55.140515 kubelet[2840]: I1106 00:23:55.139427 2840 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 00:23:55.140515 kubelet[2840]: I1106 00:23:55.139530 2840 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 00:23:55.318722 kubelet[2840]: E1106 00:23:55.318677 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:55.323781 kubelet[2840]: E1106 00:23:55.323714 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:55.324203 kubelet[2840]: E1106 00:23:55.324054 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:55.573325 kubelet[2840]: I1106 00:23:55.571431 2840 apiserver.go:52] "Watching apiserver" Nov 6 00:23:55.579361 kubelet[2840]: I1106 00:23:55.578963 2840 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:23:55.898319 kubelet[2840]: I1106 00:23:55.897222 2840 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:55.898319 kubelet[2840]: E1106 00:23:55.897860 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:55.898855 kubelet[2840]: E1106 00:23:55.898600 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:55.899116 kubelet[2840]: I1106 00:23:55.899030 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.89901735 podStartE2EDuration="4.89901735s" podCreationTimestamp="2025-11-06 00:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:55.893198935 +0000 UTC m=+1.557033220" 
watchObservedRunningTime="2025-11-06 00:23:55.89901735 +0000 UTC m=+1.562851635" Nov 6 00:23:55.938785 kubelet[2840]: E1106 00:23:55.936255 2840 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 6 00:23:55.938785 kubelet[2840]: E1106 00:23:55.936496 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:55.938785 kubelet[2840]: I1106 00:23:55.937386 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.937374525 podStartE2EDuration="1.937374525s" podCreationTimestamp="2025-11-06 00:23:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:55.936858876 +0000 UTC m=+1.600693161" watchObservedRunningTime="2025-11-06 00:23:55.937374525 +0000 UTC m=+1.601208810" Nov 6 00:23:56.007845 kubelet[2840]: I1106 00:23:56.006583 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.006563229 podStartE2EDuration="7.006563229s" podCreationTimestamp="2025-11-06 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:55.963571315 +0000 UTC m=+1.627405600" watchObservedRunningTime="2025-11-06 00:23:56.006563229 +0000 UTC m=+1.670397514" Nov 6 00:23:56.903927 kubelet[2840]: E1106 00:23:56.902740 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:56.903927 kubelet[2840]: E1106 00:23:56.903119 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:57.906494 kubelet[2840]: E1106 00:23:57.905274 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:58.621697 kubelet[2840]: I1106 00:23:58.621560 2840 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:23:58.627066 containerd[1612]: time="2025-11-06T00:23:58.626286146Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:23:58.628044 kubelet[2840]: I1106 00:23:58.627845 2840 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:23:58.910920 kubelet[2840]: E1106 00:23:58.910148 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:59.475677 systemd[1]: Created slice kubepods-besteffort-pod2198f7ae_7094_4fa2_80ee_9ef49fe461cf.slice - libcontainer container kubepods-besteffort-pod2198f7ae_7094_4fa2_80ee_9ef49fe461cf.slice. 
Nov 6 00:23:59.543261 kubelet[2840]: I1106 00:23:59.543086 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2198f7ae-7094-4fa2-80ee-9ef49fe461cf-kube-proxy\") pod \"kube-proxy-ts4h5\" (UID: \"2198f7ae-7094-4fa2-80ee-9ef49fe461cf\") " pod="kube-system/kube-proxy-ts4h5" Nov 6 00:23:59.543261 kubelet[2840]: I1106 00:23:59.543143 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2198f7ae-7094-4fa2-80ee-9ef49fe461cf-xtables-lock\") pod \"kube-proxy-ts4h5\" (UID: \"2198f7ae-7094-4fa2-80ee-9ef49fe461cf\") " pod="kube-system/kube-proxy-ts4h5" Nov 6 00:23:59.543261 kubelet[2840]: I1106 00:23:59.543172 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2198f7ae-7094-4fa2-80ee-9ef49fe461cf-lib-modules\") pod \"kube-proxy-ts4h5\" (UID: \"2198f7ae-7094-4fa2-80ee-9ef49fe461cf\") " pod="kube-system/kube-proxy-ts4h5" Nov 6 00:23:59.543261 kubelet[2840]: I1106 00:23:59.543193 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xtg\" (UniqueName: \"kubernetes.io/projected/2198f7ae-7094-4fa2-80ee-9ef49fe461cf-kube-api-access-q2xtg\") pod \"kube-proxy-ts4h5\" (UID: \"2198f7ae-7094-4fa2-80ee-9ef49fe461cf\") " pod="kube-system/kube-proxy-ts4h5" Nov 6 00:23:59.700446 systemd[1]: Created slice kubepods-besteffort-pod79283931_21d3_495c_a612_f74226e92271.slice - libcontainer container kubepods-besteffort-pod79283931_21d3_495c_a612_f74226e92271.slice. Nov 6 00:23:59.806855 kubelet[2840]: E1106 00:23:59.799352 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:23:59.809529 containerd[1612]: time="2025-11-06T00:23:59.809432369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ts4h5,Uid:2198f7ae-7094-4fa2-80ee-9ef49fe461cf,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:59.853323 kubelet[2840]: I1106 00:23:59.852861 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj85x\" (UniqueName: \"kubernetes.io/projected/79283931-21d3-495c-a612-f74226e92271-kube-api-access-zj85x\") pod \"tigera-operator-7dcd859c48-pfcgz\" (UID: \"79283931-21d3-495c-a612-f74226e92271\") " pod="tigera-operator/tigera-operator-7dcd859c48-pfcgz" Nov 6 00:23:59.853323 kubelet[2840]: I1106 00:23:59.852928 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79283931-21d3-495c-a612-f74226e92271-var-lib-calico\") pod \"tigera-operator-7dcd859c48-pfcgz\" (UID: \"79283931-21d3-495c-a612-f74226e92271\") " pod="tigera-operator/tigera-operator-7dcd859c48-pfcgz" Nov 6 00:23:59.961255 containerd[1612]: time="2025-11-06T00:23:59.961141030Z" level=info msg="connecting to shim c81e5807c1df6fbc77f2c7209e8e0d32d9118a8790ac4b3e6e6e943b6ea9ea42" address="unix:///run/containerd/s/83289906cad0156b59961179dadf14a9491936581803a7e936a6f8516531198c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:00.086173 containerd[1612]: time="2025-11-06T00:24:00.085731169Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pfcgz,Uid:79283931-21d3-495c-a612-f74226e92271,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:24:00.216108 systemd[1]: Started cri-containerd-c81e5807c1df6fbc77f2c7209e8e0d32d9118a8790ac4b3e6e6e943b6ea9ea42.scope - libcontainer container c81e5807c1df6fbc77f2c7209e8e0d32d9118a8790ac4b3e6e6e943b6ea9ea42. Nov 6 00:24:00.373892 containerd[1612]: time="2025-11-06T00:24:00.373529321Z" level=info msg="connecting to shim e748fd0d2e3115940d4ecc079eb516aae989b2c747c724683267edda6bb18d0c" address="unix:///run/containerd/s/0d27c75fab859a838e2e7c23d6b9cc84960e09257324e7af38502892789ff7c3" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:00.398772 containerd[1612]: time="2025-11-06T00:24:00.398676467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ts4h5,Uid:2198f7ae-7094-4fa2-80ee-9ef49fe461cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c81e5807c1df6fbc77f2c7209e8e0d32d9118a8790ac4b3e6e6e943b6ea9ea42\"" Nov 6 00:24:00.400221 kubelet[2840]: E1106 00:24:00.400183 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:00.415303 containerd[1612]: time="2025-11-06T00:24:00.415213651Z" level=info msg="CreateContainer within sandbox \"c81e5807c1df6fbc77f2c7209e8e0d32d9118a8790ac4b3e6e6e943b6ea9ea42\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:24:00.446760 containerd[1612]: time="2025-11-06T00:24:00.446656584Z" level=info msg="Container 0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:00.459543 systemd[1]: Started cri-containerd-e748fd0d2e3115940d4ecc079eb516aae989b2c747c724683267edda6bb18d0c.scope - libcontainer container e748fd0d2e3115940d4ecc079eb516aae989b2c747c724683267edda6bb18d0c. Nov 6 00:24:00.502731 containerd[1612]: time="2025-11-06T00:24:00.502627129Z" level=info msg="CreateContainer within sandbox \"c81e5807c1df6fbc77f2c7209e8e0d32d9118a8790ac4b3e6e6e943b6ea9ea42\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29\"" Nov 6 00:24:00.507259 containerd[1612]: time="2025-11-06T00:24:00.507177226Z" level=info msg="StartContainer for \"0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29\"" Nov 6 00:24:00.514791 containerd[1612]: time="2025-11-06T00:24:00.514706429Z" level=info msg="connecting to shim 0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29" address="unix:///run/containerd/s/83289906cad0156b59961179dadf14a9491936581803a7e936a6f8516531198c" protocol=ttrpc version=3 Nov 6 00:24:00.580017 systemd[1]: Started cri-containerd-0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29.scope - libcontainer container 0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29. 
Nov 6 00:24:00.733179 containerd[1612]: time="2025-11-06T00:24:00.731263432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-pfcgz,Uid:79283931-21d3-495c-a612-f74226e92271,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e748fd0d2e3115940d4ecc079eb516aae989b2c747c724683267edda6bb18d0c\"" Nov 6 00:24:00.756446 containerd[1612]: time="2025-11-06T00:24:00.755964971Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:24:00.781144 containerd[1612]: time="2025-11-06T00:24:00.775159246Z" level=info msg="StartContainer for \"0b365476ac6fbc54890f69dec9275ecad3c14c5060422f744c1bf703ed16fc29\" returns successfully" Nov 6 00:24:00.947083 kubelet[2840]: E1106 00:24:00.924128 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:02.629165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400252078.mount: Deactivated successfully. Nov 6 00:24:03.388115 kubelet[2840]: E1106 00:24:03.386054 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:03.499148 kubelet[2840]: I1106 00:24:03.496491 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ts4h5" podStartSLOduration=4.496467575 podStartE2EDuration="4.496467575s" podCreationTimestamp="2025-11-06 00:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:00.975341219 +0000 UTC m=+6.639175504" watchObservedRunningTime="2025-11-06 00:24:03.496467575 +0000 UTC m=+9.160301860" Nov 6 00:24:03.954825 kubelet[2840]: E1106 00:24:03.954469 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:05.388905 kubelet[2840]: E1106 00:24:05.388861 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:05.451941 containerd[1612]: time="2025-11-06T00:24:05.450279917Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:05.454623 containerd[1612]: time="2025-11-06T00:24:05.454528964Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:24:05.459758 containerd[1612]: time="2025-11-06T00:24:05.458232237Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:05.470217 containerd[1612]: time="2025-11-06T00:24:05.468463718Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:05.472721 containerd[1612]: time="2025-11-06T00:24:05.472642754Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.716625937s" Nov 6 00:24:05.472721 containerd[1612]: time="2025-11-06T00:24:05.472705322Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:24:05.506491 containerd[1612]: time="2025-11-06T00:24:05.504651453Z" level=info msg="CreateContainer within sandbox \"e748fd0d2e3115940d4ecc079eb516aae989b2c747c724683267edda6bb18d0c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:24:05.539628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594778405.mount: Deactivated successfully. Nov 6 00:24:05.546901 containerd[1612]: time="2025-11-06T00:24:05.546846978Z" level=info msg="Container 8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:05.658338 containerd[1612]: time="2025-11-06T00:24:05.658206523Z" level=info msg="CreateContainer within sandbox \"e748fd0d2e3115940d4ecc079eb516aae989b2c747c724683267edda6bb18d0c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba\"" Nov 6 00:24:05.666066 containerd[1612]: time="2025-11-06T00:24:05.659318290Z" level=info msg="StartContainer for \"8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba\"" Nov 6 00:24:05.670637 containerd[1612]: time="2025-11-06T00:24:05.670350514Z" level=info msg="connecting to shim 8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba" address="unix:///run/containerd/s/0d27c75fab859a838e2e7c23d6b9cc84960e09257324e7af38502892789ff7c3" protocol=ttrpc version=3 Nov 6 00:24:05.774199 systemd[1]: Started cri-containerd-8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba.scope - libcontainer container 8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba. Nov 6 00:24:05.979988 containerd[1612]: time="2025-11-06T00:24:05.975501204Z" level=info msg="StartContainer for \"8a6c1dc885aa3621f1fc575b0976e5fc6133fbe13760a2f26aacf5ca52c705ba\" returns successfully" Nov 6 00:24:05.994403 kubelet[2840]: E1106 00:24:05.994353 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:07.093356 kubelet[2840]: I1106 00:24:07.092070 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-pfcgz" podStartSLOduration=3.370802722 podStartE2EDuration="8.092043343s" podCreationTimestamp="2025-11-06 00:23:59 +0000 UTC" firstStartedPulling="2025-11-06 00:24:00.754771489 +0000 UTC m=+6.418605785" lastFinishedPulling="2025-11-06 00:24:05.476012121 +0000 UTC m=+11.139846406" observedRunningTime="2025-11-06 00:24:07.086894758 +0000 UTC m=+12.750729063" watchObservedRunningTime="2025-11-06 00:24:07.092043343 +0000 UTC m=+12.755877628" Nov 6 00:24:13.349349 sudo[1864]: pam_unix(sudo:session): session closed for user root Nov 6 00:24:13.356340 sshd[1857]: Connection closed by 10.0.0.1 port 51866 Nov 6 00:24:13.358559 sshd-session[1854]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:13.367420 systemd-logind[1599]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:24:13.368784 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:51866.service: Deactivated successfully. 
Nov 6 00:24:13.380043 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:24:13.380953 systemd[1]: session-9.scope: Consumed 9.218s CPU time, 223.4M memory peak. Nov 6 00:24:13.389824 systemd-logind[1599]: Removed session 9. Nov 6 00:24:19.232407 systemd[1]: Created slice kubepods-besteffort-podde5e0109_50bc_4db0_9bda_d2729947d33f.slice - libcontainer container kubepods-besteffort-podde5e0109_50bc_4db0_9bda_d2729947d33f.slice. Nov 6 00:24:19.280372 systemd[1]: Created slice kubepods-besteffort-podbd91887f_ca64_4783_811f_2ebc8c15b1fe.slice - libcontainer container kubepods-besteffort-podbd91887f_ca64_4783_811f_2ebc8c15b1fe.slice. Nov 6 00:24:19.285494 kubelet[2840]: I1106 00:24:19.285459 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-cni-log-dir\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286338 kubelet[2840]: I1106 00:24:19.285999 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd91887f-ca64-4783-811f-2ebc8c15b1fe-tigera-ca-bundle\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286338 kubelet[2840]: I1106 00:24:19.286026 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-var-run-calico\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286338 kubelet[2840]: I1106 00:24:19.286042 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd91887f-ca64-4783-811f-2ebc8c15b1fe-node-certs\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286338 kubelet[2840]: I1106 00:24:19.286056 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c5xn\" (UniqueName: \"kubernetes.io/projected/bd91887f-ca64-4783-811f-2ebc8c15b1fe-kube-api-access-6c5xn\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286338 kubelet[2840]: I1106 00:24:19.286071 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-var-lib-calico\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286492 kubelet[2840]: I1106 00:24:19.286085 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-cni-bin-dir\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286492 kubelet[2840]: I1106 00:24:19.286097 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-cni-net-dir\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286492 kubelet[2840]: I1106 00:24:19.286113 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de5e0109-50bc-4db0-9bda-d2729947d33f-tigera-ca-bundle\") pod \"calico-typha-7d8b5556f7-2dgbl\" (UID: \"de5e0109-50bc-4db0-9bda-d2729947d33f\") " pod="calico-system/calico-typha-7d8b5556f7-2dgbl" Nov 6 00:24:19.286492 kubelet[2840]: I1106 00:24:19.286126 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-lib-modules\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286492 kubelet[2840]: I1106 00:24:19.286150 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-policysync\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286649 kubelet[2840]: I1106 00:24:19.286168 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-flexvol-driver-host\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286649 kubelet[2840]: I1106 00:24:19.286182 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd91887f-ca64-4783-811f-2ebc8c15b1fe-xtables-lock\") pod \"calico-node-m8w79\" (UID: \"bd91887f-ca64-4783-811f-2ebc8c15b1fe\") " pod="calico-system/calico-node-m8w79" Nov 6 00:24:19.286649 kubelet[2840]: I1106 00:24:19.286198 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de5e0109-50bc-4db0-9bda-d2729947d33f-typha-certs\") pod \"calico-typha-7d8b5556f7-2dgbl\" (UID: \"de5e0109-50bc-4db0-9bda-d2729947d33f\") " pod="calico-system/calico-typha-7d8b5556f7-2dgbl" Nov 6 00:24:19.286649 kubelet[2840]: I1106 00:24:19.286219 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjm8\" (UniqueName: \"kubernetes.io/projected/de5e0109-50bc-4db0-9bda-d2729947d33f-kube-api-access-5xjm8\") pod \"calico-typha-7d8b5556f7-2dgbl\" (UID: \"de5e0109-50bc-4db0-9bda-d2729947d33f\") " pod="calico-system/calico-typha-7d8b5556f7-2dgbl" Nov 6 00:24:19.407335 kubelet[2840]: E1106 00:24:19.406978 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.408013 kubelet[2840]: W1106 00:24:19.407911 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.408297 kubelet[2840]: E1106 00:24:19.408198 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.409834 kubelet[2840]: E1106 00:24:19.409584 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.409924 kubelet[2840]: W1106 00:24:19.409908 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.410503 kubelet[2840]: E1106 00:24:19.410485 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.414986 kubelet[2840]: E1106 00:24:19.414935 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.414986 kubelet[2840]: W1106 00:24:19.414984 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.415136 kubelet[2840]: E1106 00:24:19.415009 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.425667 kubelet[2840]: E1106 00:24:19.425494 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.425667 kubelet[2840]: W1106 00:24:19.425647 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.425667 kubelet[2840]: E1106 00:24:19.425665 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.429832 kubelet[2840]: E1106 00:24:19.428395 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:19.482084 kubelet[2840]: E1106 00:24:19.482009 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.482084 kubelet[2840]: W1106 00:24:19.482041 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.483963 kubelet[2840]: E1106 00:24:19.482218 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.483963 kubelet[2840]: E1106 00:24:19.482965 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.483963 kubelet[2840]: W1106 00:24:19.482975 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.484135 kubelet[2840]: E1106 00:24:19.484111 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.484373 kubelet[2840]: E1106 00:24:19.484354 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.484373 kubelet[2840]: W1106 00:24:19.484366 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.484470 kubelet[2840]: E1106 00:24:19.484376 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.485595 kubelet[2840]: E1106 00:24:19.485551 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.485595 kubelet[2840]: W1106 00:24:19.485567 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.485595 kubelet[2840]: E1106 00:24:19.485581 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.486883 kubelet[2840]: E1106 00:24:19.486860 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.486883 kubelet[2840]: W1106 00:24:19.486878 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.486987 kubelet[2840]: E1106 00:24:19.486891 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.487207 kubelet[2840]: E1106 00:24:19.487175 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.487207 kubelet[2840]: W1106 00:24:19.487195 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.487351 kubelet[2840]: E1106 00:24:19.487219 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.487511 kubelet[2840]: E1106 00:24:19.487494 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.487511 kubelet[2840]: W1106 00:24:19.487508 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.487615 kubelet[2840]: E1106 00:24:19.487518 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.487884 kubelet[2840]: E1106 00:24:19.487866 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.487884 kubelet[2840]: W1106 00:24:19.487880 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.487959 kubelet[2840]: E1106 00:24:19.487893 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.488106 kubelet[2840]: E1106 00:24:19.488093 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.488106 kubelet[2840]: W1106 00:24:19.488104 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.488167 kubelet[2840]: E1106 00:24:19.488115 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.488311 kubelet[2840]: E1106 00:24:19.488298 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.488311 kubelet[2840]: W1106 00:24:19.488308 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.488363 kubelet[2840]: E1106 00:24:19.488318 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.488518 kubelet[2840]: E1106 00:24:19.488497 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.488518 kubelet[2840]: W1106 00:24:19.488512 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.488623 kubelet[2840]: E1106 00:24:19.488522 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.488830 kubelet[2840]: E1106 00:24:19.488779 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.488830 kubelet[2840]: W1106 00:24:19.488818 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.488885 kubelet[2840]: E1106 00:24:19.488830 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.489077 kubelet[2840]: E1106 00:24:19.489061 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.489077 kubelet[2840]: W1106 00:24:19.489072 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.489134 kubelet[2840]: E1106 00:24:19.489081 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.489290 kubelet[2840]: E1106 00:24:19.489263 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.489290 kubelet[2840]: W1106 00:24:19.489286 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.489339 kubelet[2840]: E1106 00:24:19.489295 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.489497 kubelet[2840]: E1106 00:24:19.489482 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.489497 kubelet[2840]: W1106 00:24:19.489493 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.489554 kubelet[2840]: E1106 00:24:19.489502 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.489708 kubelet[2840]: E1106 00:24:19.489692 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.489708 kubelet[2840]: W1106 00:24:19.489702 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.489760 kubelet[2840]: E1106 00:24:19.489712 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.489934 kubelet[2840]: E1106 00:24:19.489919 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.489934 kubelet[2840]: W1106 00:24:19.489930 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.489998 kubelet[2840]: E1106 00:24:19.489940 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.490147 kubelet[2840]: E1106 00:24:19.490115 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.490147 kubelet[2840]: W1106 00:24:19.490142 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.490258 kubelet[2840]: E1106 00:24:19.490151 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.490360 kubelet[2840]: E1106 00:24:19.490345 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.490360 kubelet[2840]: W1106 00:24:19.490356 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.490414 kubelet[2840]: E1106 00:24:19.490365 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.490563 kubelet[2840]: E1106 00:24:19.490549 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.490563 kubelet[2840]: W1106 00:24:19.490559 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.490621 kubelet[2840]: E1106 00:24:19.490568 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.492155 kubelet[2840]: E1106 00:24:19.492137 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.492155 kubelet[2840]: W1106 00:24:19.492151 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.492245 kubelet[2840]: E1106 00:24:19.492165 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.492245 kubelet[2840]: I1106 00:24:19.492198 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/02736082-3e52-4d26-97e7-7ca149273f4e-registration-dir\") pod \"csi-node-driver-2gxx8\" (UID: \"02736082-3e52-4d26-97e7-7ca149273f4e\") " pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:19.492903 kubelet[2840]: E1106 00:24:19.492886 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.492903 kubelet[2840]: W1106 00:24:19.492900 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.492978 kubelet[2840]: E1106 00:24:19.492912 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.493352 kubelet[2840]: I1106 00:24:19.493320 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02736082-3e52-4d26-97e7-7ca149273f4e-kubelet-dir\") pod \"csi-node-driver-2gxx8\" (UID: \"02736082-3e52-4d26-97e7-7ca149273f4e\") " pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:19.493446 kubelet[2840]: E1106 00:24:19.493432 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.493506 kubelet[2840]: W1106 00:24:19.493490 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.493506 kubelet[2840]: E1106 00:24:19.493505 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.493698 kubelet[2840]: E1106 00:24:19.493661 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.493698 kubelet[2840]: W1106 00:24:19.493670 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.493698 kubelet[2840]: E1106 00:24:19.493677 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.493868 kubelet[2840]: E1106 00:24:19.493857 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.493868 kubelet[2840]: W1106 00:24:19.493865 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.493946 kubelet[2840]: E1106 00:24:19.493873 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.494022 kubelet[2840]: E1106 00:24:19.494010 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.494022 kubelet[2840]: W1106 00:24:19.494019 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.494090 kubelet[2840]: E1106 00:24:19.494026 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.494180 kubelet[2840]: E1106 00:24:19.494168 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.494180 kubelet[2840]: W1106 00:24:19.494177 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.494240 kubelet[2840]: E1106 00:24:19.494184 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.494240 kubelet[2840]: I1106 00:24:19.494203 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/02736082-3e52-4d26-97e7-7ca149273f4e-socket-dir\") pod \"csi-node-driver-2gxx8\" (UID: \"02736082-3e52-4d26-97e7-7ca149273f4e\") " pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:19.494434 kubelet[2840]: E1106 00:24:19.494413 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.494491 kubelet[2840]: W1106 00:24:19.494434 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.494491 kubelet[2840]: E1106 00:24:19.494448 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.494533 kubelet[2840]: I1106 00:24:19.494497 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/02736082-3e52-4d26-97e7-7ca149273f4e-varrun\") pod \"csi-node-driver-2gxx8\" (UID: \"02736082-3e52-4d26-97e7-7ca149273f4e\") " pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:19.494738 kubelet[2840]: E1106 00:24:19.494706 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.494772 kubelet[2840]: W1106 00:24:19.494756 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.494772 kubelet[2840]: E1106 00:24:19.494767 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:19.494842 kubelet[2840]: I1106 00:24:19.494787 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vx7\" (UniqueName: \"kubernetes.io/projected/02736082-3e52-4d26-97e7-7ca149273f4e-kube-api-access-n4vx7\") pod \"csi-node-driver-2gxx8\" (UID: \"02736082-3e52-4d26-97e7-7ca149273f4e\") " pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:19.496853 kubelet[2840]: E1106 00:24:19.496081 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.496853 kubelet[2840]: W1106 00:24:19.496097 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.496853 kubelet[2840]: E1106 00:24:19.496110 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.496853 kubelet[2840]: E1106 00:24:19.496353 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.496853 kubelet[2840]: W1106 00:24:19.496362 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.496853 kubelet[2840]: E1106 00:24:19.496373 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.496853 kubelet[2840]: E1106 00:24:19.496583 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.496853 kubelet[2840]: W1106 00:24:19.496592 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.496853 kubelet[2840]: E1106 00:24:19.496606 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:19.497862 kubelet[2840]: E1106 00:24:19.497840 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:19.497862 kubelet[2840]: W1106 00:24:19.497857 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:19.497937 kubelet[2840]: E1106 00:24:19.497869 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 6 00:24:19.537421 kubelet[2840]: E1106 00:24:19.537364 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:24:19.538414 containerd[1612]: time="2025-11-06T00:24:19.538369748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d8b5556f7-2dgbl,Uid:de5e0109-50bc-4db0-9bda-d2729947d33f,Namespace:calico-system,Attempt:0,}"
Nov 6 00:24:19.560682 containerd[1612]: time="2025-11-06T00:24:19.560606329Z" level=info msg="connecting to shim 5b82d9a106120dd810d0580ad4d37f9100df7381d1c17e0e37369c6ddc1547a5" address="unix:///run/containerd/s/87ea35eb8840a295f766f254a7147d2ab3c96673a235984ae2960e8273f12298" namespace=k8s.io protocol=ttrpc version=3
Nov 6 00:24:19.585915 kubelet[2840]: E1106 00:24:19.585860 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:24:19.586399 containerd[1612]: time="2025-11-06T00:24:19.586360801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8w79,Uid:bd91887f-ca64-4783-811f-2ebc8c15b1fe,Namespace:calico-system,Attempt:0,}"
Nov 6 00:24:19.600129 systemd[1]: Started cri-containerd-5b82d9a106120dd810d0580ad4d37f9100df7381d1c17e0e37369c6ddc1547a5.scope - libcontainer container 5b82d9a106120dd810d0580ad4d37f9100df7381d1c17e0e37369c6ddc1547a5.
[FlexVolume probe errors continue to interleave these entries; duplicates omitted]
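The recurring dns.go:153 warning means the node's resolv.conf lists more nameservers than kubelet will propagate into pod resolv.conf files: the limit is three (matching the classic glibc resolver's MAXNS), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and the rest are dropped. A small sketch of that kind of check, assuming the standard /etc/resolv.conf location:

```go
// resolvcheck.go — sketch of the check behind kubelet's
// "Nameserver limits exceeded" warning: at most 3 nameserver lines
// are honored; anything beyond the first three is omitted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolver limit (glibc MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("applied: %s\n", strings.Join(servers[:maxNameservers], " "))
		fmt.Printf("omitted: %s\n", strings.Join(servers[maxNameservers:], " "))
	} else {
		fmt.Printf("ok: %s\n", strings.Join(servers, " "))
	}
}
```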
Nov 6 00:24:19.618020 containerd[1612]: time="2025-11-06T00:24:19.617910968Z" level=info msg="connecting to shim 9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892" address="unix:///run/containerd/s/d6bfbadf8bc3166d916589dca06c24021620097b020128ad789f3bce6cd78850" namespace=k8s.io protocol=ttrpc version=3
[one more FlexVolume probe error omitted]
Nov 6 00:24:19.649079 systemd[1]: Started cri-containerd-9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892.scope - libcontainer container 9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892.
Nov 6 00:24:19.667844 containerd[1612]: time="2025-11-06T00:24:19.667774093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d8b5556f7-2dgbl,Uid:de5e0109-50bc-4db0-9bda-d2729947d33f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b82d9a106120dd810d0580ad4d37f9100df7381d1c17e0e37369c6ddc1547a5\""
Nov 6 00:24:19.676927 kubelet[2840]: E1106 00:24:19.676663 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:24:19.681607 containerd[1612]: time="2025-11-06T00:24:19.681557364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 6 00:24:19.699109 containerd[1612]: time="2025-11-06T00:24:19.699057959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8w79,Uid:bd91887f-ca64-4783-811f-2ebc8c15b1fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\""
Nov 6 00:24:19.699774 kubelet[2840]: E1106 00:24:19.699736 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:24:20.853506 kubelet[2840]: E1106 00:24:20.853435 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e"
Nov 6 00:24:22.142685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861728587.mount: Deactivated successfully.
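The pod_workers.go error for csi-node-driver-2gxx8 is the expected chicken-and-egg state during bring-up: the runtime reports NetworkReady=false until a CNI network config exists on the node, and it is the calico-node pod whose sandbox was just created that eventually writes one. A sketch of the corresponding on-node check, assuming containerd's default CNI conf directory /etc/cni/net.d (the path is configurable):

```go
// cnicheck.go — sketch: pods without host networking stay pending while
// the CNI conf directory is empty, which is what "cni plugin not
// initialized" above reflects.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	// Matches both .conf and .conflist network configs.
	matches, _ := filepath.Glob(filepath.Join(dir, "*.conf*"))
	if len(matches) == 0 {
		fmt.Printf("%s is empty: CNI not initialized yet\n", dir)
		return
	}
	for _, m := range matches {
		if fi, err := os.Stat(m); err == nil {
			fmt.Printf("%s (%d bytes)\n", m, fi.Size())
		}
	}
}
```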
Nov 6 00:24:22.853477 kubelet[2840]: E1106 00:24:22.853418 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e"
Nov 6 00:24:23.188764 containerd[1612]: time="2025-11-06T00:24:23.188576516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:24:23.189496 containerd[1612]: time="2025-11-06T00:24:23.189448301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 6 00:24:23.190867 containerd[1612]: time="2025-11-06T00:24:23.190788996Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:24:23.193663 containerd[1612]: time="2025-11-06T00:24:23.193609417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:24:23.194251 containerd[1612]: time="2025-11-06T00:24:23.194202190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.512599681s"
Nov 6 00:24:23.194251 containerd[1612]: time="2025-11-06T00:24:23.194237256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 6 00:24:23.195294 containerd[1612]: time="2025-11-06T00:24:23.195270013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 6 00:24:23.211991 containerd[1612]: time="2025-11-06T00:24:23.211940127Z" level=info msg="CreateContainer within sandbox \"5b82d9a106120dd810d0580ad4d37f9100df7381d1c17e0e37369c6ddc1547a5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 6 00:24:23.312239 containerd[1612]: time="2025-11-06T00:24:23.311988232Z" level=info msg="Container 309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:24:23.545546 containerd[1612]: time="2025-11-06T00:24:23.545406556Z" level=info msg="CreateContainer within sandbox \"5b82d9a106120dd810d0580ad4d37f9100df7381d1c17e0e37369c6ddc1547a5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb\""
Nov 6 00:24:23.546103 containerd[1612]: time="2025-11-06T00:24:23.546068578Z" level=info msg="StartContainer for \"309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb\""
Nov 6 00:24:23.547078 containerd[1612]: time="2025-11-06T00:24:23.547043697Z" level=info msg="connecting to shim 309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb" address="unix:///run/containerd/s/87ea35eb8840a295f766f254a7147d2ab3c96673a235984ae2960e8273f12298" protocol=ttrpc version=3
Nov 6 00:24:23.568965 systemd[1]: Started cri-containerd-309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb.scope - libcontainer container 309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb.
Nov 6 00:24:23.718942 containerd[1612]: time="2025-11-06T00:24:23.718879189Z" level=info msg="StartContainer for \"309cf5e073c4d1f1c57f49fb6d2f87665fceece1f42ddf63f99b91fafa92cbdb\" returns successfully"
Nov 6 00:24:24.066942 kubelet[2840]: E1106 00:24:24.066891 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 00:24:24.121664 kubelet[2840]: E1106 00:24:24.121143 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:24:24.121664 kubelet[2840]: W1106 00:24:24.121191 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:24:24.121664 kubelet[2840]: E1106 00:24:24.121218 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume probe error repeats through 00:24:24.14x; duplicates omitted]
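The connecting-to-shim / Started scope / StartContainer sequence is containerd's per-container shim lifecycle over ttrpc, with CRI-managed objects living in the k8s.io namespace; the typha pull itself moved 35234628 bytes in about 3.51 s, roughly 10 MB/s. As a sketch of inspecting the same state with containerd's Go client — assuming the default socket path and the containerd 1.x client import path:

```go
// ctrlist.go — sketch: list containers in the CRI ("k8s.io") namespace,
// the same namespace the shim connections above are made in.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed pods and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		name := "?"
		if img, err := c.Image(ctx); err == nil {
			name = img.Name()
		}
		fmt.Printf("%s  %s\n", c.ID(), name)
	}
}
```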
Nov 6 00:24:24.853912 kubelet[2840]: E1106 00:24:24.853798 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e"
Nov 6 00:24:25.068501 kubelet[2840]: I1106 00:24:25.068460 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 6 00:24:25.069081 kubelet[2840]: E1106 00:24:25.068897 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
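The prober_manager.go line is kubelet's readiness machinery reacting to the newly started container; readiness probes are declared per container in the pod spec and executed by this manager. A purely illustrative sketch using the Kubernetes Go types — the exec command below is hypothetical, not calico-typha's actual probe:

```go
// probe-sketch.go — illustrative readiness probe definition of the kind
// the prober manager above runs.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	p := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			// Hypothetical readiness command for illustration only.
			Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-ready"}},
		},
		InitialDelaySeconds: 5,
		PeriodSeconds:       10,
	}
	fmt.Printf("exec %v every %ds after an initial %ds delay\n",
		p.Exec.Command, p.PeriodSeconds, p.InitialDelaySeconds)
}
```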
[FlexVolume probe errors resume at 00:24:25.14x; duplicates omitted]
Nov 6 00:24:25.148580 kubelet[2840]: E1106 00:24:25.148530 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:25.148714 kubelet[2840]: E1106 00:24:25.148700 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.148714 kubelet[2840]: W1106 00:24:25.148709 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.148781 kubelet[2840]: E1106 00:24:25.148718 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.148958 kubelet[2840]: E1106 00:24:25.148942 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.148958 kubelet[2840]: W1106 00:24:25.148955 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.149042 kubelet[2840]: E1106 00:24:25.148965 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.149286 kubelet[2840]: E1106 00:24:25.149264 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.149286 kubelet[2840]: W1106 00:24:25.149281 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.149361 kubelet[2840]: E1106 00:24:25.149294 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.149588 kubelet[2840]: E1106 00:24:25.149568 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.149588 kubelet[2840]: W1106 00:24:25.149582 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.149659 kubelet[2840]: E1106 00:24:25.149593 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.149834 kubelet[2840]: E1106 00:24:25.149798 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.149834 kubelet[2840]: W1106 00:24:25.149826 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.149918 kubelet[2840]: E1106 00:24:25.149842 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:25.150104 kubelet[2840]: E1106 00:24:25.150074 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.150104 kubelet[2840]: W1106 00:24:25.150091 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.150104 kubelet[2840]: E1106 00:24:25.150105 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.150398 kubelet[2840]: E1106 00:24:25.150370 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.150398 kubelet[2840]: W1106 00:24:25.150383 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.150398 kubelet[2840]: E1106 00:24:25.150393 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.150738 kubelet[2840]: E1106 00:24:25.150723 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.150738 kubelet[2840]: W1106 00:24:25.150736 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.150790 kubelet[2840]: E1106 00:24:25.150745 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:24:25.150983 kubelet[2840]: E1106 00:24:25.150959 2840 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:24:25.150983 kubelet[2840]: W1106 00:24:25.150972 2840 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:24:25.150983 kubelet[2840]: E1106 00:24:25.150980 2840 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:24:25.203700 containerd[1612]: time="2025-11-06T00:24:25.203637317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:25.204636 containerd[1612]: time="2025-11-06T00:24:25.204599121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 6 00:24:25.205965 containerd[1612]: time="2025-11-06T00:24:25.205926762Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:25.208350 containerd[1612]: time="2025-11-06T00:24:25.208300284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:25.209152 containerd[1612]: time="2025-11-06T00:24:25.209005227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.013703405s" Nov 6 00:24:25.209152 containerd[1612]: time="2025-11-06T00:24:25.209043459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 00:24:25.214445 containerd[1612]: time="2025-11-06T00:24:25.214372225Z" level=info msg="CreateContainer within sandbox \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 00:24:25.225304 containerd[1612]: time="2025-11-06T00:24:25.225239973Z" level=info msg="Container ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:25.234527 containerd[1612]: time="2025-11-06T00:24:25.234450762Z" level=info msg="CreateContainer within sandbox \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\"" Nov 6 00:24:25.235222 containerd[1612]: time="2025-11-06T00:24:25.235187224Z" level=info msg="StartContainer for \"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\"" Nov 6 00:24:25.237004 containerd[1612]: time="2025-11-06T00:24:25.236935453Z" level=info msg="connecting to shim ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c" address="unix:///run/containerd/s/d6bfbadf8bc3166d916589dca06c24021620097b020128ad789f3bce6cd78850" protocol=ttrpc version=3 Nov 6 00:24:25.276146 systemd[1]: Started cri-containerd-ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c.scope - libcontainer container ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c. 
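The driver-call.go/plugins.go failures above come from the kubelet's dynamic FlexVolume probe: it execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and expects a JSON status object on stdout. The nodeagent~uds/uds binary does not exist yet (the flexvol-driver container started just above is what installs it), so the captured output is empty and JSON decoding fails. A minimal Go sketch reproduces the exact error; the struct shape is illustrative, not the kubelet's actual type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Illustrative shape of the status a FlexVolume driver prints on stdout
    // in response to "init"; the JSON keys ("status", "capabilities") follow
    // the FlexVolume convention, but this is not the kubelet's own struct.
    type driverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The driver binary was missing, so the kubelet captured output "".
        // Unmarshaling an empty document yields the error seen in the log.
        var st driverStatus
        if err := json.Unmarshal([]byte(""), &st); err != nil {
            fmt.Println(err) // unexpected end of JSON input
        }
    }

Once the flexvol-driver container has copied the uds binary into place, the probe succeeds, which is consistent with these errors not recurring later in the log.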
Nov 6 00:24:25.326836 containerd[1612]: time="2025-11-06T00:24:25.326702974Z" level=info msg="StartContainer for \"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\" returns successfully" Nov 6 00:24:25.341266 systemd[1]: cri-containerd-ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c.scope: Deactivated successfully. Nov 6 00:24:25.341668 systemd[1]: cri-containerd-ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c.scope: Consumed 43ms CPU time, 6.2M memory peak, 4.6M written to disk. Nov 6 00:24:25.343866 containerd[1612]: time="2025-11-06T00:24:25.343824674Z" level=info msg="received exit event container_id:\"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\" id:\"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\" pid:3574 exited_at:{seconds:1762388665 nanos:343200091}" Nov 6 00:24:25.343990 containerd[1612]: time="2025-11-06T00:24:25.343838971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\" id:\"ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c\" pid:3574 exited_at:{seconds:1762388665 nanos:343200091}" Nov 6 00:24:25.366273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab02f7eb2cbbbba57bd0ec72c10b2f598fb1180e2457b62c4aca7b662686501c-rootfs.mount: Deactivated successfully. Nov 6 00:24:26.072834 kubelet[2840]: E1106 00:24:26.071853 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:26.075773 containerd[1612]: time="2025-11-06T00:24:26.075672912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:24:26.088993 kubelet[2840]: I1106 00:24:26.088934 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d8b5556f7-2dgbl" podStartSLOduration=3.571207389 podStartE2EDuration="7.088918189s" podCreationTimestamp="2025-11-06 00:24:19 +0000 UTC" firstStartedPulling="2025-11-06 00:24:19.677455859 +0000 UTC m=+25.341290134" lastFinishedPulling="2025-11-06 00:24:23.195166438 +0000 UTC m=+28.859000934" observedRunningTime="2025-11-06 00:24:24.113427085 +0000 UTC m=+29.777261390" watchObservedRunningTime="2025-11-06 00:24:26.088918189 +0000 UTC m=+31.752752464" Nov 6 00:24:26.853451 kubelet[2840]: E1106 00:24:26.853352 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:28.652224 containerd[1612]: time="2025-11-06T00:24:28.652156472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:28.652973 containerd[1612]: time="2025-11-06T00:24:28.652953651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:24:28.654390 containerd[1612]: time="2025-11-06T00:24:28.654338325Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:28.656503 containerd[1612]: time="2025-11-06T00:24:28.656432748Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:28.656986 containerd[1612]: time="2025-11-06T00:24:28.656952392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.581188699s" Nov 6 00:24:28.656986 containerd[1612]: time="2025-11-06T00:24:28.656986046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:24:28.716869 containerd[1612]: time="2025-11-06T00:24:28.716783902Z" level=info msg="CreateContainer within sandbox \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:24:28.814289 containerd[1612]: time="2025-11-06T00:24:28.814215644Z" level=info msg="Container ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:28.824515 containerd[1612]: time="2025-11-06T00:24:28.824458439Z" level=info msg="CreateContainer within sandbox \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\"" Nov 6 00:24:28.825048 containerd[1612]: time="2025-11-06T00:24:28.825005937Z" level=info msg="StartContainer for \"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\"" Nov 6 00:24:28.826838 containerd[1612]: time="2025-11-06T00:24:28.826788738Z" level=info msg="connecting to shim ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9" address="unix:///run/containerd/s/d6bfbadf8bc3166d916589dca06c24021620097b020128ad789f3bce6cd78850" protocol=ttrpc version=3 Nov 6 00:24:28.853351 kubelet[2840]: E1106 00:24:28.853292 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:28.854036 systemd[1]: Started cri-containerd-ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9.scope - libcontainer container ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9. 
Nov 6 00:24:28.964566 containerd[1612]: time="2025-11-06T00:24:28.964420851Z" level=info msg="StartContainer for \"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\" returns successfully" Nov 6 00:24:29.079222 kubelet[2840]: E1106 00:24:29.079164 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:30.081376 kubelet[2840]: E1106 00:24:30.081320 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:30.499646 systemd[1]: cri-containerd-ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9.scope: Deactivated successfully. Nov 6 00:24:30.500332 systemd[1]: cri-containerd-ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9.scope: Consumed 675ms CPU time, 181.3M memory peak, 3.2M read from disk, 171.3M written to disk. Nov 6 00:24:30.502559 containerd[1612]: time="2025-11-06T00:24:30.502529129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\" id:\"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\" pid:3632 exited_at:{seconds:1762388670 nanos:500080790}" Nov 6 00:24:30.522791 containerd[1612]: time="2025-11-06T00:24:30.522753203Z" level=info msg="received exit event container_id:\"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\" id:\"ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9\" pid:3632 exited_at:{seconds:1762388670 nanos:500080790}" Nov 6 00:24:30.544963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab7339d7c47ce1c4b2b2513ada92753499d3f542c43d9d8f5e209eac3bc994d9-rootfs.mount: Deactivated successfully. Nov 6 00:24:30.577322 kubelet[2840]: I1106 00:24:30.577285 2840 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:24:30.869209 systemd[1]: Created slice kubepods-besteffort-pod02736082_3e52_4d26_97e7_7ca149273f4e.slice - libcontainer container kubepods-besteffort-pod02736082_3e52_4d26_97e7_7ca149273f4e.slice. Nov 6 00:24:30.871759 containerd[1612]: time="2025-11-06T00:24:30.871705498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gxx8,Uid:02736082-3e52-4d26-97e7-7ca149273f4e,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:31.057518 systemd[1]: Created slice kubepods-besteffort-poda4ce142d_4dd8_4bd0_9300_ec9175d131da.slice - libcontainer container kubepods-besteffort-poda4ce142d_4dd8_4bd0_9300_ec9175d131da.slice. Nov 6 00:24:31.178855 systemd[1]: Created slice kubepods-burstable-pod83dbbda3_67a2_4589_b3f0_66ca7b03029b.slice - libcontainer container kubepods-burstable-pod83dbbda3_67a2_4589_b3f0_66ca7b03029b.slice. Nov 6 00:24:31.185520 containerd[1612]: time="2025-11-06T00:24:31.185466351Z" level=error msg="Failed to destroy network for sandbox \"077a73f7abbb18d9c5a4b2f2021a5129053a948592c67fae85f25af589dd5e15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.187523 systemd[1]: run-netns-cni\x2d62df1af7\x2d3dcd\x2dfda4\x2d34ff\x2d6b34959e818b.mount: Deactivated successfully. 
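The TaskExit events above carry the exit time as a protobuf-style timestamp (seconds and nanos since the Unix epoch). Converting the exited_at value back confirms it matches the journal prefix, a quick way to correlate CRI events with wall-clock log lines:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the ab7339d7... TaskExit event above:
        // seconds:1762388670 nanos:500080790
        t := time.Unix(1762388670, 500080790).UTC()
        fmt.Println(t) // 2025-11-06 00:24:30.50008079 +0000 UTC
    }

Note that Go trims the trailing zero of the fractional seconds when printing, so .500080790 displays as .50008079.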
Nov 6 00:24:31.189009 kubelet[2840]: I1106 00:24:31.188962 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6cf5\" (UniqueName: \"kubernetes.io/projected/a4ce142d-4dd8-4bd0-9300-ec9175d131da-kube-api-access-m6cf5\") pod \"goldmane-666569f655-nvmth\" (UID: \"a4ce142d-4dd8-4bd0-9300-ec9175d131da\") " pod="calico-system/goldmane-666569f655-nvmth" Nov 6 00:24:31.189359 kubelet[2840]: I1106 00:24:31.189069 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ce142d-4dd8-4bd0-9300-ec9175d131da-config\") pod \"goldmane-666569f655-nvmth\" (UID: \"a4ce142d-4dd8-4bd0-9300-ec9175d131da\") " pod="calico-system/goldmane-666569f655-nvmth" Nov 6 00:24:31.189359 kubelet[2840]: I1106 00:24:31.189111 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4ce142d-4dd8-4bd0-9300-ec9175d131da-goldmane-ca-bundle\") pod \"goldmane-666569f655-nvmth\" (UID: \"a4ce142d-4dd8-4bd0-9300-ec9175d131da\") " pod="calico-system/goldmane-666569f655-nvmth" Nov 6 00:24:31.189359 kubelet[2840]: I1106 00:24:31.189135 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a4ce142d-4dd8-4bd0-9300-ec9175d131da-goldmane-key-pair\") pod \"goldmane-666569f655-nvmth\" (UID: \"a4ce142d-4dd8-4bd0-9300-ec9175d131da\") " pod="calico-system/goldmane-666569f655-nvmth" Nov 6 00:24:31.251531 containerd[1612]: time="2025-11-06T00:24:31.251426149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gxx8,Uid:02736082-3e52-4d26-97e7-7ca149273f4e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"077a73f7abbb18d9c5a4b2f2021a5129053a948592c67fae85f25af589dd5e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.251890 kubelet[2840]: E1106 00:24:31.251790 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077a73f7abbb18d9c5a4b2f2021a5129053a948592c67fae85f25af589dd5e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.252094 kubelet[2840]: E1106 00:24:31.251915 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077a73f7abbb18d9c5a4b2f2021a5129053a948592c67fae85f25af589dd5e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:31.252094 kubelet[2840]: E1106 00:24:31.251935 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"077a73f7abbb18d9c5a4b2f2021a5129053a948592c67fae85f25af589dd5e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-2gxx8" Nov 6 00:24:31.252094 kubelet[2840]: E1106 00:24:31.251984 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"077a73f7abbb18d9c5a4b2f2021a5129053a948592c67fae85f25af589dd5e15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:31.258587 systemd[1]: Created slice kubepods-burstable-pod5aa1b76a_bf42_444c_9590_6b40021950af.slice - libcontainer container kubepods-burstable-pod5aa1b76a_bf42_444c_9590_6b40021950af.slice. Nov 6 00:24:31.271289 systemd[1]: Created slice kubepods-besteffort-pode5128615_7aa8_48b2_97ed_a5b035282b5e.slice - libcontainer container kubepods-besteffort-pode5128615_7aa8_48b2_97ed_a5b035282b5e.slice. Nov 6 00:24:31.278392 kubelet[2840]: E1106 00:24:31.278324 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:31.281461 containerd[1612]: time="2025-11-06T00:24:31.281412798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 00:24:31.291718 kubelet[2840]: I1106 00:24:31.289952 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83dbbda3-67a2-4589-b3f0-66ca7b03029b-config-volume\") pod \"coredns-674b8bbfcf-6jnrw\" (UID: \"83dbbda3-67a2-4589-b3f0-66ca7b03029b\") " pod="kube-system/coredns-674b8bbfcf-6jnrw" Nov 6 00:24:31.291718 kubelet[2840]: I1106 00:24:31.290012 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dz8l\" (UniqueName: \"kubernetes.io/projected/83dbbda3-67a2-4589-b3f0-66ca7b03029b-kube-api-access-8dz8l\") pod \"coredns-674b8bbfcf-6jnrw\" (UID: \"83dbbda3-67a2-4589-b3f0-66ca7b03029b\") " pod="kube-system/coredns-674b8bbfcf-6jnrw" Nov 6 00:24:31.295577 systemd[1]: Created slice kubepods-besteffort-pod04a82cf5_90fc_40d4_9038_65add7f7f20f.slice - libcontainer container kubepods-besteffort-pod04a82cf5_90fc_40d4_9038_65add7f7f20f.slice. Nov 6 00:24:31.313883 systemd[1]: Created slice kubepods-besteffort-podc3d35955_d96c_4c0f_8dbc_021043287219.slice - libcontainer container kubepods-besteffort-podc3d35955_d96c_4c0f_8dbc_021043287219.slice. Nov 6 00:24:31.322359 systemd[1]: Created slice kubepods-besteffort-pod00228e39_7e54_4a3f_b428_59bfdf4f00aa.slice - libcontainer container kubepods-besteffort-pod00228e39_7e54_4a3f_b428_59bfdf4f00aa.slice. 
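Every sandbox failure in this stretch has the same root cause, spelled out in the error text itself: the calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes at startup, and the file is absent because calico/node is still initializing (install-cni only just ran). A minimal sketch of that readiness guard; the function is illustrative, not calico's actual source:

    package main

    import (
        "fmt"
        "os"
    )

    // checkCalicoNode sketches the guard the calico CNI plugin applies before
    // setting up or tearing down pod networking: calico/node writes its
    // hostname to /var/lib/calico/nodename once running, so a missing file
    // means sandbox setup must be refused.
    func checkCalicoNode() error {
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            // os.Stat's error already reads "stat /var/lib/calico/nodename:
            // no such file or directory"; wrapping it reproduces the log line.
            return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return nil
    }

    func main() {
        if err := checkCalicoNode(); err != nil {
            fmt.Println(err)
        }
    }

The kubelet wraps each such failure into a CreatePodSandboxError and retries with backoff, which is why the identical message reappears below for every pending pod.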
Nov 6 00:24:31.362849 containerd[1612]: time="2025-11-06T00:24:31.362754013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nvmth,Uid:a4ce142d-4dd8-4bd0-9300-ec9175d131da,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:31.390405 kubelet[2840]: I1106 00:24:31.390339 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04a82cf5-90fc-40d4-9038-65add7f7f20f-calico-apiserver-certs\") pod \"calico-apiserver-79747456c8-9kz7k\" (UID: \"04a82cf5-90fc-40d4-9038-65add7f7f20f\") " pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" Nov 6 00:24:31.390405 kubelet[2840]: I1106 00:24:31.390398 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00228e39-7e54-4a3f-b428-59bfdf4f00aa-calico-apiserver-certs\") pod \"calico-apiserver-79747456c8-w2vxq\" (UID: \"00228e39-7e54-4a3f-b428-59bfdf4f00aa\") " pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" Nov 6 00:24:31.390606 kubelet[2840]: I1106 00:24:31.390445 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdg5t\" (UniqueName: \"kubernetes.io/projected/00228e39-7e54-4a3f-b428-59bfdf4f00aa-kube-api-access-fdg5t\") pod \"calico-apiserver-79747456c8-w2vxq\" (UID: \"00228e39-7e54-4a3f-b428-59bfdf4f00aa\") " pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" Nov 6 00:24:31.390606 kubelet[2840]: I1106 00:24:31.390462 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5128615-7aa8-48b2-97ed-a5b035282b5e-tigera-ca-bundle\") pod \"calico-kube-controllers-864c69c456-2zzkg\" (UID: \"e5128615-7aa8-48b2-97ed-a5b035282b5e\") " pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" Nov 6 00:24:31.392111 kubelet[2840]: I1106 00:24:31.390982 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2tm\" (UniqueName: \"kubernetes.io/projected/5aa1b76a-bf42-444c-9590-6b40021950af-kube-api-access-md2tm\") pod \"coredns-674b8bbfcf-96c9f\" (UID: \"5aa1b76a-bf42-444c-9590-6b40021950af\") " pod="kube-system/coredns-674b8bbfcf-96c9f" Nov 6 00:24:31.392111 kubelet[2840]: I1106 00:24:31.391044 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzr9j\" (UniqueName: \"kubernetes.io/projected/e5128615-7aa8-48b2-97ed-a5b035282b5e-kube-api-access-tzr9j\") pod \"calico-kube-controllers-864c69c456-2zzkg\" (UID: \"e5128615-7aa8-48b2-97ed-a5b035282b5e\") " pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" Nov 6 00:24:31.392111 kubelet[2840]: I1106 00:24:31.391068 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-ca-bundle\") pod \"whisker-5c989599c-n9489\" (UID: \"c3d35955-d96c-4c0f-8dbc-021043287219\") " pod="calico-system/whisker-5c989599c-n9489" Nov 6 00:24:31.392111 kubelet[2840]: I1106 00:24:31.391093 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7vzd\" (UniqueName: \"kubernetes.io/projected/c3d35955-d96c-4c0f-8dbc-021043287219-kube-api-access-d7vzd\") pod 
\"whisker-5c989599c-n9489\" (UID: \"c3d35955-d96c-4c0f-8dbc-021043287219\") " pod="calico-system/whisker-5c989599c-n9489" Nov 6 00:24:31.392111 kubelet[2840]: I1106 00:24:31.391128 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa1b76a-bf42-444c-9590-6b40021950af-config-volume\") pod \"coredns-674b8bbfcf-96c9f\" (UID: \"5aa1b76a-bf42-444c-9590-6b40021950af\") " pod="kube-system/coredns-674b8bbfcf-96c9f" Nov 6 00:24:31.393087 kubelet[2840]: I1106 00:24:31.393064 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-backend-key-pair\") pod \"whisker-5c989599c-n9489\" (UID: \"c3d35955-d96c-4c0f-8dbc-021043287219\") " pod="calico-system/whisker-5c989599c-n9489" Nov 6 00:24:31.394752 kubelet[2840]: I1106 00:24:31.394725 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdc8\" (UniqueName: \"kubernetes.io/projected/04a82cf5-90fc-40d4-9038-65add7f7f20f-kube-api-access-jvdc8\") pod \"calico-apiserver-79747456c8-9kz7k\" (UID: \"04a82cf5-90fc-40d4-9038-65add7f7f20f\") " pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" Nov 6 00:24:31.423397 containerd[1612]: time="2025-11-06T00:24:31.423321935Z" level=error msg="Failed to destroy network for sandbox \"32b909143ba628d5529a2e4a0e3e3c8775fb1496ed9d503d8766f1b040928bfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.449288 containerd[1612]: time="2025-11-06T00:24:31.449115918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nvmth,Uid:a4ce142d-4dd8-4bd0-9300-ec9175d131da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32b909143ba628d5529a2e4a0e3e3c8775fb1496ed9d503d8766f1b040928bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.449507 kubelet[2840]: E1106 00:24:31.449447 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32b909143ba628d5529a2e4a0e3e3c8775fb1496ed9d503d8766f1b040928bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.449569 kubelet[2840]: E1106 00:24:31.449520 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32b909143ba628d5529a2e4a0e3e3c8775fb1496ed9d503d8766f1b040928bfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nvmth" Nov 6 00:24:31.449569 kubelet[2840]: E1106 00:24:31.449542 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32b909143ba628d5529a2e4a0e3e3c8775fb1496ed9d503d8766f1b040928bfa\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nvmth" Nov 6 00:24:31.449725 kubelet[2840]: E1106 00:24:31.449602 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nvmth_calico-system(a4ce142d-4dd8-4bd0-9300-ec9175d131da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nvmth_calico-system(a4ce142d-4dd8-4bd0-9300-ec9175d131da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32b909143ba628d5529a2e4a0e3e3c8775fb1496ed9d503d8766f1b040928bfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:24:31.563573 kubelet[2840]: E1106 00:24:31.563535 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:31.564319 containerd[1612]: time="2025-11-06T00:24:31.564234182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96c9f,Uid:5aa1b76a-bf42-444c-9590-6b40021950af,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:31.575328 containerd[1612]: time="2025-11-06T00:24:31.575282402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-864c69c456-2zzkg,Uid:e5128615-7aa8-48b2-97ed-a5b035282b5e,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:31.612877 containerd[1612]: time="2025-11-06T00:24:31.612794035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-9kz7k,Uid:04a82cf5-90fc-40d4-9038-65add7f7f20f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:24:31.617771 containerd[1612]: time="2025-11-06T00:24:31.617716927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c989599c-n9489,Uid:c3d35955-d96c-4c0f-8dbc-021043287219,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:31.626675 containerd[1612]: time="2025-11-06T00:24:31.626602853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-w2vxq,Uid:00228e39-7e54-4a3f-b428-59bfdf4f00aa,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:24:31.783537 kubelet[2840]: E1106 00:24:31.782477 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:31.784100 containerd[1612]: time="2025-11-06T00:24:31.784038586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jnrw,Uid:83dbbda3-67a2-4589-b3f0-66ca7b03029b,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:31.865950 containerd[1612]: time="2025-11-06T00:24:31.865857130Z" level=error msg="Failed to destroy network for sandbox \"8384aeb6c3f744e2b3713bf39cb31fef4bda7a92bb818213abcdd5cac7694dcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.868697 containerd[1612]: time="2025-11-06T00:24:31.868627236Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-96c9f,Uid:5aa1b76a-bf42-444c-9590-6b40021950af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8384aeb6c3f744e2b3713bf39cb31fef4bda7a92bb818213abcdd5cac7694dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.869986 kubelet[2840]: E1106 00:24:31.869731 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8384aeb6c3f744e2b3713bf39cb31fef4bda7a92bb818213abcdd5cac7694dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.870149 kubelet[2840]: E1106 00:24:31.870026 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8384aeb6c3f744e2b3713bf39cb31fef4bda7a92bb818213abcdd5cac7694dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-96c9f" Nov 6 00:24:31.870149 kubelet[2840]: E1106 00:24:31.870068 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8384aeb6c3f744e2b3713bf39cb31fef4bda7a92bb818213abcdd5cac7694dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-96c9f" Nov 6 00:24:31.870294 kubelet[2840]: E1106 00:24:31.870177 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-96c9f_kube-system(5aa1b76a-bf42-444c-9590-6b40021950af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-96c9f_kube-system(5aa1b76a-bf42-444c-9590-6b40021950af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8384aeb6c3f744e2b3713bf39cb31fef4bda7a92bb818213abcdd5cac7694dcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-96c9f" podUID="5aa1b76a-bf42-444c-9590-6b40021950af" Nov 6 00:24:31.898095 containerd[1612]: time="2025-11-06T00:24:31.897961669Z" level=error msg="Failed to destroy network for sandbox \"e98f0b513c8c26fbf70c03d424fa8caf49201a41b461250b6fd68c327e1aa004\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.901431 containerd[1612]: time="2025-11-06T00:24:31.901231226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-864c69c456-2zzkg,Uid:e5128615-7aa8-48b2-97ed-a5b035282b5e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98f0b513c8c26fbf70c03d424fa8caf49201a41b461250b6fd68c327e1aa004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.901758 kubelet[2840]: E1106 00:24:31.901706 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98f0b513c8c26fbf70c03d424fa8caf49201a41b461250b6fd68c327e1aa004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.901864 kubelet[2840]: E1106 00:24:31.901800 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98f0b513c8c26fbf70c03d424fa8caf49201a41b461250b6fd68c327e1aa004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" Nov 6 00:24:31.901940 kubelet[2840]: E1106 00:24:31.901865 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e98f0b513c8c26fbf70c03d424fa8caf49201a41b461250b6fd68c327e1aa004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" Nov 6 00:24:31.904256 kubelet[2840]: E1106 00:24:31.901952 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-864c69c456-2zzkg_calico-system(e5128615-7aa8-48b2-97ed-a5b035282b5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-864c69c456-2zzkg_calico-system(e5128615-7aa8-48b2-97ed-a5b035282b5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e98f0b513c8c26fbf70c03d424fa8caf49201a41b461250b6fd68c327e1aa004\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:24:31.910219 containerd[1612]: time="2025-11-06T00:24:31.910111220Z" level=error msg="Failed to destroy network for sandbox \"8bb6d35f3da946ede17512f6ed6803e170b2320c952ed282e21a2a46498a09e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.914061 containerd[1612]: time="2025-11-06T00:24:31.913990151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-9kz7k,Uid:04a82cf5-90fc-40d4-9038-65add7f7f20f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb6d35f3da946ede17512f6ed6803e170b2320c952ed282e21a2a46498a09e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.914835 kubelet[2840]: E1106 00:24:31.914730 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"8bb6d35f3da946ede17512f6ed6803e170b2320c952ed282e21a2a46498a09e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.914913 kubelet[2840]: E1106 00:24:31.914835 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb6d35f3da946ede17512f6ed6803e170b2320c952ed282e21a2a46498a09e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" Nov 6 00:24:31.914913 kubelet[2840]: E1106 00:24:31.914865 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb6d35f3da946ede17512f6ed6803e170b2320c952ed282e21a2a46498a09e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" Nov 6 00:24:31.915028 kubelet[2840]: E1106 00:24:31.914924 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79747456c8-9kz7k_calico-apiserver(04a82cf5-90fc-40d4-9038-65add7f7f20f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79747456c8-9kz7k_calico-apiserver(04a82cf5-90fc-40d4-9038-65add7f7f20f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bb6d35f3da946ede17512f6ed6803e170b2320c952ed282e21a2a46498a09e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:24:31.930090 containerd[1612]: time="2025-11-06T00:24:31.930029606Z" level=error msg="Failed to destroy network for sandbox \"442da16970497eba8276c26148b6d16766fe9ca783fc9e982d2126f383509bcd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.933252 containerd[1612]: time="2025-11-06T00:24:31.933072838Z" level=error msg="Failed to destroy network for sandbox \"8205015e23978bcd2d5d345b628fe682d9ebe9423986b00602e9fa2536da5150\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.934068 containerd[1612]: time="2025-11-06T00:24:31.934020563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-w2vxq,Uid:00228e39-7e54-4a3f-b428-59bfdf4f00aa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"442da16970497eba8276c26148b6d16766fe9ca783fc9e982d2126f383509bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.934521 kubelet[2840]: E1106 
00:24:31.934447 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442da16970497eba8276c26148b6d16766fe9ca783fc9e982d2126f383509bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.934603 kubelet[2840]: E1106 00:24:31.934533 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442da16970497eba8276c26148b6d16766fe9ca783fc9e982d2126f383509bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" Nov 6 00:24:31.934603 kubelet[2840]: E1106 00:24:31.934559 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"442da16970497eba8276c26148b6d16766fe9ca783fc9e982d2126f383509bcd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" Nov 6 00:24:31.934736 kubelet[2840]: E1106 00:24:31.934639 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79747456c8-w2vxq_calico-apiserver(00228e39-7e54-4a3f-b428-59bfdf4f00aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79747456c8-w2vxq_calico-apiserver(00228e39-7e54-4a3f-b428-59bfdf4f00aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"442da16970497eba8276c26148b6d16766fe9ca783fc9e982d2126f383509bcd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:24:31.935893 containerd[1612]: time="2025-11-06T00:24:31.935744333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c989599c-n9489,Uid:c3d35955-d96c-4c0f-8dbc-021043287219,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8205015e23978bcd2d5d345b628fe682d9ebe9423986b00602e9fa2536da5150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.936333 kubelet[2840]: E1106 00:24:31.936269 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8205015e23978bcd2d5d345b628fe682d9ebe9423986b00602e9fa2536da5150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.936540 kubelet[2840]: E1106 00:24:31.936383 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8205015e23978bcd2d5d345b628fe682d9ebe9423986b00602e9fa2536da5150\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c989599c-n9489" Nov 6 00:24:31.936540 kubelet[2840]: E1106 00:24:31.936424 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8205015e23978bcd2d5d345b628fe682d9ebe9423986b00602e9fa2536da5150\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c989599c-n9489" Nov 6 00:24:31.936634 kubelet[2840]: E1106 00:24:31.936540 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c989599c-n9489_calico-system(c3d35955-d96c-4c0f-8dbc-021043287219)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c989599c-n9489_calico-system(c3d35955-d96c-4c0f-8dbc-021043287219)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8205015e23978bcd2d5d345b628fe682d9ebe9423986b00602e9fa2536da5150\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c989599c-n9489" podUID="c3d35955-d96c-4c0f-8dbc-021043287219" Nov 6 00:24:31.939450 containerd[1612]: time="2025-11-06T00:24:31.939377662Z" level=error msg="Failed to destroy network for sandbox \"6fc7edc11666641d05b544f054b5898994d0a2c3452aad7609bdb82a6f765927\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.941135 containerd[1612]: time="2025-11-06T00:24:31.941040665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jnrw,Uid:83dbbda3-67a2-4589-b3f0-66ca7b03029b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc7edc11666641d05b544f054b5898994d0a2c3452aad7609bdb82a6f765927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.941632 kubelet[2840]: E1106 00:24:31.941582 2840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc7edc11666641d05b544f054b5898994d0a2c3452aad7609bdb82a6f765927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:24:31.941717 kubelet[2840]: E1106 00:24:31.941672 2840 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fc7edc11666641d05b544f054b5898994d0a2c3452aad7609bdb82a6f765927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6jnrw" Nov 6 00:24:31.941717 kubelet[2840]: E1106 00:24:31.941702 2840 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"6fc7edc11666641d05b544f054b5898994d0a2c3452aad7609bdb82a6f765927\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6jnrw" Nov 6 00:24:31.942394 kubelet[2840]: E1106 00:24:31.941792 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6jnrw_kube-system(83dbbda3-67a2-4589-b3f0-66ca7b03029b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6jnrw_kube-system(83dbbda3-67a2-4589-b3f0-66ca7b03029b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fc7edc11666641d05b544f054b5898994d0a2c3452aad7609bdb82a6f765927\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6jnrw" podUID="83dbbda3-67a2-4589-b3f0-66ca7b03029b" Nov 6 00:24:32.548016 systemd[1]: run-netns-cni\x2d777d3cd4\x2d14e7\x2d1eeb\x2d20fc\x2d3c70051b64d6.mount: Deactivated successfully. Nov 6 00:24:32.548162 systemd[1]: run-netns-cni\x2da80d39db\x2de86c\x2da507\x2d5d36\x2d0141f0afbb30.mount: Deactivated successfully. Nov 6 00:24:32.548265 systemd[1]: run-netns-cni\x2d08af59b7\x2d5a7d\x2d11f3\x2d9705\x2dc784f8a078ef.mount: Deactivated successfully. Nov 6 00:24:32.548408 systemd[1]: run-netns-cni\x2d76fb4015\x2de6aa\x2d2a3f\x2d1e36\x2de42aef84bece.mount: Deactivated successfully. Nov 6 00:24:35.298793 kubelet[2840]: I1106 00:24:35.298724 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:24:35.299675 kubelet[2840]: E1106 00:24:35.299230 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:36.113552 kubelet[2840]: E1106 00:24:36.113516 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:39.891574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715477452.mount: Deactivated successfully. Nov 6 00:24:42.069072 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:38494.service - OpenSSH per-connection server daemon (10.0.0.1:38494). 
Nov 6 00:24:42.095191 containerd[1612]: time="2025-11-06T00:24:42.095121600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:42.100918 containerd[1612]: time="2025-11-06T00:24:42.100861150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:24:42.106259 containerd[1612]: time="2025-11-06T00:24:42.106199782Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:42.111243 containerd[1612]: time="2025-11-06T00:24:42.111201391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:24:42.128852 containerd[1612]: time="2025-11-06T00:24:42.128657744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.847183278s" Nov 6 00:24:42.128852 containerd[1612]: time="2025-11-06T00:24:42.128722518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:24:42.176582 containerd[1612]: time="2025-11-06T00:24:42.176523530Z" level=info msg="CreateContainer within sandbox \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:24:42.185879 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 38494 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:24:42.188059 sshd-session[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:42.189825 containerd[1612]: time="2025-11-06T00:24:42.189764680Z" level=info msg="Container ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:42.199960 systemd-logind[1599]: New session 10 of user core. Nov 6 00:24:42.205466 containerd[1612]: time="2025-11-06T00:24:42.205420780Z" level=info msg="CreateContainer within sandbox \"9ee4a82ecca180edfa0e027d32d898d2ccc00ddfa9add9901b7f0d8f66506892\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\"" Nov 6 00:24:42.205984 containerd[1612]: time="2025-11-06T00:24:42.205959281Z" level=info msg="StartContainer for \"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\"" Nov 6 00:24:42.207534 containerd[1612]: time="2025-11-06T00:24:42.207505609Z" level=info msg="connecting to shim ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10" address="unix:///run/containerd/s/d6bfbadf8bc3166d916589dca06c24021620097b020128ad789f3bce6cd78850" protocol=ttrpc version=3 Nov 6 00:24:42.210078 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 6 00:24:42.251153 systemd[1]: Started cri-containerd-ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10.scope - libcontainer container ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10. Nov 6 00:24:42.311941 containerd[1612]: time="2025-11-06T00:24:42.311886407Z" level=info msg="StartContainer for \"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\" returns successfully" Nov 6 00:24:42.390211 sshd[3953]: Connection closed by 10.0.0.1 port 38494 Nov 6 00:24:42.392015 sshd-session[3948]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:42.397611 systemd-logind[1599]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:24:42.398124 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:38494.service: Deactivated successfully. Nov 6 00:24:42.401216 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:24:42.402848 systemd-logind[1599]: Removed session 10. Nov 6 00:24:42.411393 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 00:24:42.411618 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 6 00:24:42.674643 kubelet[2840]: I1106 00:24:42.674248 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-ca-bundle\") pod \"c3d35955-d96c-4c0f-8dbc-021043287219\" (UID: \"c3d35955-d96c-4c0f-8dbc-021043287219\") " Nov 6 00:24:42.674643 kubelet[2840]: I1106 00:24:42.674297 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7vzd\" (UniqueName: \"kubernetes.io/projected/c3d35955-d96c-4c0f-8dbc-021043287219-kube-api-access-d7vzd\") pod \"c3d35955-d96c-4c0f-8dbc-021043287219\" (UID: \"c3d35955-d96c-4c0f-8dbc-021043287219\") " Nov 6 00:24:42.674643 kubelet[2840]: I1106 00:24:42.674315 2840 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-backend-key-pair\") pod \"c3d35955-d96c-4c0f-8dbc-021043287219\" (UID: \"c3d35955-d96c-4c0f-8dbc-021043287219\") " Nov 6 00:24:42.674643 kubelet[2840]: I1106 00:24:42.674742 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c3d35955-d96c-4c0f-8dbc-021043287219" (UID: "c3d35955-d96c-4c0f-8dbc-021043287219"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:24:42.678982 kubelet[2840]: I1106 00:24:42.678902 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c3d35955-d96c-4c0f-8dbc-021043287219" (UID: "c3d35955-d96c-4c0f-8dbc-021043287219"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:24:42.678982 kubelet[2840]: I1106 00:24:42.678925 2840 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d35955-d96c-4c0f-8dbc-021043287219-kube-api-access-d7vzd" (OuterVolumeSpecName: "kube-api-access-d7vzd") pod "c3d35955-d96c-4c0f-8dbc-021043287219" (UID: "c3d35955-d96c-4c0f-8dbc-021043287219"). InnerVolumeSpecName "kube-api-access-d7vzd".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:24:42.775468 kubelet[2840]: I1106 00:24:42.775152 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 6 00:24:42.775468 kubelet[2840]: I1106 00:24:42.775190 2840 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7vzd\" (UniqueName: \"kubernetes.io/projected/c3d35955-d96c-4c0f-8dbc-021043287219-kube-api-access-d7vzd\") on node \"localhost\" DevicePath \"\"" Nov 6 00:24:42.775468 kubelet[2840]: I1106 00:24:42.775199 2840 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3d35955-d96c-4c0f-8dbc-021043287219-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 6 00:24:42.856024 containerd[1612]: time="2025-11-06T00:24:42.855961925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gxx8,Uid:02736082-3e52-4d26-97e7-7ca149273f4e,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:42.863962 systemd[1]: Removed slice kubepods-besteffort-podc3d35955_d96c_4c0f_8dbc_021043287219.slice - libcontainer container kubepods-besteffort-podc3d35955_d96c_4c0f_8dbc_021043287219.slice. Nov 6 00:24:43.131733 systemd-networkd[1515]: cali7dd6dd02aac: Link UP Nov 6 00:24:43.132079 systemd-networkd[1515]: cali7dd6dd02aac: Gained carrier Nov 6 00:24:43.163202 systemd[1]: var-lib-kubelet-pods-c3d35955\x2dd96c\x2d4c0f\x2d8dbc\x2d021043287219-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7vzd.mount: Deactivated successfully. Nov 6 00:24:43.163352 systemd[1]: var-lib-kubelet-pods-c3d35955\x2dd96c\x2d4c0f\x2d8dbc\x2d021043287219-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 6 00:24:43.176549 containerd[1612]: 2025-11-06 00:24:42.923 [INFO][4035] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:24:43.176549 containerd[1612]: 2025-11-06 00:24:42.955 [INFO][4035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2gxx8-eth0 csi-node-driver- calico-system 02736082-3e52-4d26-97e7-7ca149273f4e 783 0 2025-11-06 00:24:19 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2gxx8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7dd6dd02aac [] [] }} ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-" Nov 6 00:24:43.176549 containerd[1612]: 2025-11-06 00:24:42.955 [INFO][4035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.176549 containerd[1612]: 2025-11-06 00:24:43.061 [INFO][4048] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" HandleID="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Workload="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.062 [INFO][4048] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" HandleID="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Workload="localhost-k8s-csi--node--driver--2gxx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2gxx8", "timestamp":"2025-11-06 00:24:43.061328859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.062 [INFO][4048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.062 [INFO][4048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.063 [INFO][4048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.077 [INFO][4048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" host="localhost" Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.087 [INFO][4048] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.093 [INFO][4048] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.096 [INFO][4048] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.099 [INFO][4048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:43.177341 containerd[1612]: 2025-11-06 00:24:43.099 [INFO][4048] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" host="localhost" Nov 6 00:24:43.177659 containerd[1612]: 2025-11-06 00:24:43.102 [INFO][4048] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143 Nov 6 00:24:43.177659 containerd[1612]: 2025-11-06 00:24:43.110 [INFO][4048] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" host="localhost" Nov 6 00:24:43.177659 containerd[1612]: 2025-11-06 00:24:43.117 [INFO][4048] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" host="localhost" Nov 6 00:24:43.177659 containerd[1612]: 2025-11-06 00:24:43.117 [INFO][4048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" host="localhost" Nov 6 00:24:43.177659 containerd[1612]: 2025-11-06 00:24:43.117 [INFO][4048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:24:43.177659 containerd[1612]: 2025-11-06 00:24:43.117 [INFO][4048] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" HandleID="k8s-pod-network.6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Workload="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.177784 containerd[1612]: 2025-11-06 00:24:43.122 [INFO][4035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2gxx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02736082-3e52-4d26-97e7-7ca149273f4e", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2gxx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7dd6dd02aac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:43.181930 containerd[1612]: 2025-11-06 00:24:43.122 [INFO][4035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.181930 containerd[1612]: 2025-11-06 00:24:43.122 [INFO][4035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dd6dd02aac ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.181930 containerd[1612]: 2025-11-06 00:24:43.132 [INFO][4035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.182018 containerd[1612]: 2025-11-06 00:24:43.133 [INFO][4035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8"
WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2gxx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"02736082-3e52-4d26-97e7-7ca149273f4e", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143", Pod:"csi-node-driver-2gxx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7dd6dd02aac", MAC:"da:4e:a9:ea:58:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:43.182083 containerd[1612]: 2025-11-06 00:24:43.155 [INFO][4035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" Namespace="calico-system" Pod="csi-node-driver-2gxx8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2gxx8-eth0" Nov 6 00:24:43.217893 kubelet[2840]: E1106 00:24:43.217840 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:43.239450 kubelet[2840]: I1106 00:24:43.238885 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m8w79" podStartSLOduration=1.806333461 podStartE2EDuration="24.238866089s" podCreationTimestamp="2025-11-06 00:24:19 +0000 UTC" firstStartedPulling="2025-11-06 00:24:19.700495547 +0000 UTC m=+25.364329832" lastFinishedPulling="2025-11-06 00:24:42.133028175 +0000 UTC m=+47.796862460" observedRunningTime="2025-11-06 00:24:43.238605231 +0000 UTC m=+48.902439546" watchObservedRunningTime="2025-11-06 00:24:43.238866089 +0000 UTC m=+48.902700374" Nov 6 00:24:43.685653 kubelet[2840]: I1106 00:24:43.683874 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55h9\" (UniqueName: \"kubernetes.io/projected/d1c893a4-aa9b-4a6a-9aff-057008af6a5e-kube-api-access-v55h9\") pod \"whisker-6d79bfcd45-zqqkv\" (UID: \"d1c893a4-aa9b-4a6a-9aff-057008af6a5e\") " pod="calico-system/whisker-6d79bfcd45-zqqkv" Nov 6 00:24:43.685653 kubelet[2840]: I1106 00:24:43.683941 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1c893a4-aa9b-4a6a-9aff-057008af6a5e-whisker-backend-key-pair\") pod \"whisker-6d79bfcd45-zqqkv\" (UID: 
\"d1c893a4-aa9b-4a6a-9aff-057008af6a5e\") " pod="calico-system/whisker-6d79bfcd45-zqqkv" Nov 6 00:24:43.685653 kubelet[2840]: I1106 00:24:43.683962 2840 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1c893a4-aa9b-4a6a-9aff-057008af6a5e-whisker-ca-bundle\") pod \"whisker-6d79bfcd45-zqqkv\" (UID: \"d1c893a4-aa9b-4a6a-9aff-057008af6a5e\") " pod="calico-system/whisker-6d79bfcd45-zqqkv" Nov 6 00:24:43.686437 systemd[1]: Created slice kubepods-besteffort-podd1c893a4_aa9b_4a6a_9aff_057008af6a5e.slice - libcontainer container kubepods-besteffort-podd1c893a4_aa9b_4a6a_9aff_057008af6a5e.slice. Nov 6 00:24:43.835162 containerd[1612]: time="2025-11-06T00:24:43.835007040Z" level=info msg="connecting to shim 6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143" address="unix:///run/containerd/s/6ea735c44a49bd3c96dd12cf19accdfcb42a90ed66dcfbe6639c56a2cd06dfaf" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:43.856683 containerd[1612]: time="2025-11-06T00:24:43.856628491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-864c69c456-2zzkg,Uid:e5128615-7aa8-48b2-97ed-a5b035282b5e,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:43.881025 systemd[1]: Started cri-containerd-6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143.scope - libcontainer container 6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143. Nov 6 00:24:43.958005 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:44.008896 containerd[1612]: time="2025-11-06T00:24:44.008848969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d79bfcd45-zqqkv,Uid:d1c893a4-aa9b-4a6a-9aff-057008af6a5e,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:44.118757 containerd[1612]: time="2025-11-06T00:24:44.118677156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2gxx8,Uid:02736082-3e52-4d26-97e7-7ca149273f4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ecbfeecc99f93ce060fc10c6c346c30a85d650ce7fa88ec49e602e8e5a93143\"" Nov 6 00:24:44.123479 containerd[1612]: time="2025-11-06T00:24:44.123444888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:24:44.248108 systemd-networkd[1515]: cali51a36269b42: Link UP Nov 6 00:24:44.249261 systemd-networkd[1515]: cali51a36269b42: Gained carrier Nov 6 00:24:44.271737 containerd[1612]: 2025-11-06 00:24:44.142 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0 calico-kube-controllers-864c69c456- calico-system e5128615-7aa8-48b2-97ed-a5b035282b5e 911 0 2025-11-06 00:24:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:864c69c456 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-864c69c456-2zzkg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali51a36269b42 [] [] }} ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-" Nov 6 
00:24:44.271737 containerd[1612]: 2025-11-06 00:24:44.142 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.271737 containerd[1612]: 2025-11-06 00:24:44.190 [INFO][4269] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" HandleID="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Workload="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.191 [INFO][4269] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" HandleID="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Workload="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-864c69c456-2zzkg", "timestamp":"2025-11-06 00:24:44.190830735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.191 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.191 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.191 [INFO][4269] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.201 [INFO][4269] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" host="localhost" Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.205 [INFO][4269] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.210 [INFO][4269] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.213 [INFO][4269] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.218 [INFO][4269] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:44.272331 containerd[1612]: 2025-11-06 00:24:44.218 [INFO][4269] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" host="localhost" Nov 6 00:24:44.272559 containerd[1612]: 2025-11-06 00:24:44.222 [INFO][4269] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46 Nov 6 00:24:44.272559 containerd[1612]: 2025-11-06 00:24:44.227 [INFO][4269] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" host="localhost" Nov 6 00:24:44.272559 containerd[1612]: 2025-11-06 00:24:44.233 [INFO][4269] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" host="localhost" Nov 6 00:24:44.272559 containerd[1612]: 2025-11-06 00:24:44.233 [INFO][4269] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" host="localhost" Nov 6 00:24:44.272559 containerd[1612]: 2025-11-06 00:24:44.233 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:24:44.272559 containerd[1612]: 2025-11-06 00:24:44.233 [INFO][4269] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" HandleID="k8s-pod-network.ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Workload="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.272680 containerd[1612]: 2025-11-06 00:24:44.243 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0", GenerateName:"calico-kube-controllers-864c69c456-", Namespace:"calico-system", SelfLink:"", UID:"e5128615-7aa8-48b2-97ed-a5b035282b5e", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"864c69c456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-864c69c456-2zzkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51a36269b42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:44.272735 containerd[1612]: 2025-11-06 00:24:44.244 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.272735 containerd[1612]: 2025-11-06 00:24:44.244 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51a36269b42 ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.272735 containerd[1612]: 2025-11-06 00:24:44.248 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.272795 containerd[1612]: 2025-11-06 00:24:44.249 [INFO][4228] cni-plugin/k8s.go 446: Added Mac,
interface name, and active container ID to endpoint ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0", GenerateName:"calico-kube-controllers-864c69c456-", Namespace:"calico-system", SelfLink:"", UID:"e5128615-7aa8-48b2-97ed-a5b035282b5e", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"864c69c456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46", Pod:"calico-kube-controllers-864c69c456-2zzkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali51a36269b42", MAC:"4e:0a:91:cc:35:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:44.272870 containerd[1612]: 2025-11-06 00:24:44.265 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" Namespace="calico-system" Pod="calico-kube-controllers-864c69c456-2zzkg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--864c69c456--2zzkg-eth0" Nov 6 00:24:44.385568 systemd-networkd[1515]: cali7bfa46d08cb: Link UP Nov 6 00:24:44.386309 containerd[1612]: time="2025-11-06T00:24:44.386239419Z" level=info msg="connecting to shim ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46" address="unix:///run/containerd/s/996d751cab5e34177b59d316aff8d6301a31dcd4e41704d4f312a10b2697d389" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:44.389572 systemd-networkd[1515]: cali7bfa46d08cb: Gained carrier Nov 6 00:24:44.418592 containerd[1612]: 2025-11-06 00:24:44.161 [INFO][4235] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0 whisker-6d79bfcd45- calico-system d1c893a4-aa9b-4a6a-9aff-057008af6a5e 1033 0 2025-11-06 00:24:43 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d79bfcd45 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d79bfcd45-zqqkv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7bfa46d08cb [] [] }} ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv"
WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-" Nov 6 00:24:44.418592 containerd[1612]: 2025-11-06 00:24:44.161 [INFO][4235] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.418592 containerd[1612]: 2025-11-06 00:24:44.198 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" HandleID="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Workload="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.198 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" HandleID="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Workload="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001385d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d79bfcd45-zqqkv", "timestamp":"2025-11-06 00:24:44.19833209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.198 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.233 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.234 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.302 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" host="localhost" Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.315 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.324 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.329 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.337 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:44.419282 containerd[1612]: 2025-11-06 00:24:44.340 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" host="localhost" Nov 6 00:24:44.419743 containerd[1612]: 2025-11-06 00:24:44.343 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1 Nov 6 00:24:44.419743 containerd[1612]: 2025-11-06 00:24:44.356 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" host="localhost" Nov 6 00:24:44.419743 containerd[1612]: 2025-11-06 00:24:44.366 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" host="localhost" Nov 6 00:24:44.419743 containerd[1612]: 2025-11-06 00:24:44.367 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" host="localhost" Nov 6 00:24:44.419743 containerd[1612]: 2025-11-06 00:24:44.367 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:24:44.419743 containerd[1612]: 2025-11-06 00:24:44.368 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" HandleID="k8s-pod-network.f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Workload="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.420032 containerd[1612]: 2025-11-06 00:24:44.377 [INFO][4235] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0", GenerateName:"whisker-6d79bfcd45-", Namespace:"calico-system", SelfLink:"", UID:"d1c893a4-aa9b-4a6a-9aff-057008af6a5e", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d79bfcd45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d79bfcd45-zqqkv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7bfa46d08cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:44.420032 containerd[1612]: 2025-11-06 00:24:44.378 [INFO][4235] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.420368 containerd[1612]: 2025-11-06 00:24:44.378 [INFO][4235] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bfa46d08cb ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.420368 containerd[1612]: 2025-11-06 00:24:44.391 [INFO][4235] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.420599 containerd[1612]: 2025-11-06 00:24:44.392 [INFO][4235] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0", GenerateName:"whisker-6d79bfcd45-", Namespace:"calico-system", SelfLink:"", UID:"d1c893a4-aa9b-4a6a-9aff-057008af6a5e", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d79bfcd45", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1", Pod:"whisker-6d79bfcd45-zqqkv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7bfa46d08cb", MAC:"ae:a6:69:69:d7:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:44.420697 containerd[1612]: 2025-11-06 00:24:44.409 [INFO][4235] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" Namespace="calico-system" Pod="whisker-6d79bfcd45-zqqkv" WorkloadEndpoint="localhost-k8s-whisker--6d79bfcd45--zqqkv-eth0" Nov 6 00:24:44.452478 systemd[1]: Started cri-containerd-ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46.scope - libcontainer container ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46. 
Nov 6 00:24:44.476065 containerd[1612]: time="2025-11-06T00:24:44.474031414Z" level=info msg="connecting to shim f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1" address="unix:///run/containerd/s/65cb747cc718348320596721c027efbe8ca5a2cc19d6b85e98fa7d13ba4f9072" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:44.479867 containerd[1612]: time="2025-11-06T00:24:44.478012853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:44.479867 containerd[1612]: time="2025-11-06T00:24:44.479439750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:24:44.479867 containerd[1612]: time="2025-11-06T00:24:44.479558847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:24:44.483427 kubelet[2840]: E1106 00:24:44.483258 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:44.484010 kubelet[2840]: E1106 00:24:44.483866 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:44.486207 kubelet[2840]: E1106 00:24:44.485965 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:44.491469 containerd[1612]: time="2025-11-06T00:24:44.491327398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:24:44.519668 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:44.526565 systemd[1]: Started cri-containerd-f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1.scope - libcontainer container f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1. 
Nov 6 00:24:44.547979 systemd-networkd[1515]: cali7dd6dd02aac: Gained IPv6LL Nov 6 00:24:44.552392 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:44.629487 systemd-networkd[1515]: vxlan.calico: Link UP Nov 6 00:24:44.629499 systemd-networkd[1515]: vxlan.calico: Gained carrier Nov 6 00:24:44.633312 containerd[1612]: time="2025-11-06T00:24:44.633229909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d79bfcd45-zqqkv,Uid:d1c893a4-aa9b-4a6a-9aff-057008af6a5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f51057cfc5c5b4127b6f293c88e56218ec6e0d713a9e4da1865c52d4c65462c1\"" Nov 6 00:24:44.654400 containerd[1612]: time="2025-11-06T00:24:44.654353658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-864c69c456-2zzkg,Uid:e5128615-7aa8-48b2-97ed-a5b035282b5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad7da50b0621ddaf9298e67dff5a572119ffe2f2a8ef1f39dcf807b0431b3c46\"" Nov 6 00:24:44.855931 containerd[1612]: time="2025-11-06T00:24:44.855867560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nvmth,Uid:a4ce142d-4dd8-4bd0-9300-ec9175d131da,Namespace:calico-system,Attempt:0,}" Nov 6 00:24:44.859287 kubelet[2840]: I1106 00:24:44.859235 2840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d35955-d96c-4c0f-8dbc-021043287219" path="/var/lib/kubelet/pods/c3d35955-d96c-4c0f-8dbc-021043287219/volumes" Nov 6 00:24:44.865073 containerd[1612]: time="2025-11-06T00:24:44.864959686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:44.868348 containerd[1612]: time="2025-11-06T00:24:44.867876580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:24:44.868348 containerd[1612]: time="2025-11-06T00:24:44.868006599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:24:44.869035 kubelet[2840]: E1106 00:24:44.868642 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:44.869035 kubelet[2840]: E1106 00:24:44.868703 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:44.870200 kubelet[2840]: E1106 00:24:44.870142 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:44.870394 containerd[1612]: time="2025-11-06T00:24:44.870278931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:24:44.871517 kubelet[2840]: E1106 00:24:44.871449 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:45.019761 systemd-networkd[1515]: cali9baf71093d8: Link UP Nov 6 00:24:45.020021 systemd-networkd[1515]: cali9baf71093d8: Gained carrier Nov 6 00:24:45.039340 containerd[1612]: 2025-11-06 00:24:44.926 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nvmth-eth0 goldmane-666569f655- calico-system 
a4ce142d-4dd8-4bd0-9300-ec9175d131da 896 0 2025-11-06 00:24:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nvmth eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9baf71093d8 [] [] }} ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-" Nov 6 00:24:45.039340 containerd[1612]: 2025-11-06 00:24:44.926 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.039340 containerd[1612]: 2025-11-06 00:24:44.962 [INFO][4452] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" HandleID="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Workload="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.963 [INFO][4452] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" HandleID="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Workload="localhost-k8s-goldmane--666569f655--nvmth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nvmth", "timestamp":"2025-11-06 00:24:44.962698399 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.963 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.963 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.963 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.973 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" host="localhost" Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.982 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.987 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.989 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.994 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:45.039641 containerd[1612]: 2025-11-06 00:24:44.994 [INFO][4452] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" host="localhost" Nov 6 00:24:45.039962 containerd[1612]: 2025-11-06 00:24:44.996 [INFO][4452] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b Nov 6 00:24:45.039962 containerd[1612]: 2025-11-06 00:24:45.002 [INFO][4452] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" host="localhost" Nov 6 00:24:45.039962 containerd[1612]: 2025-11-06 00:24:45.010 [INFO][4452] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" host="localhost" Nov 6 00:24:45.039962 containerd[1612]: 2025-11-06 00:24:45.010 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" host="localhost" Nov 6 00:24:45.039962 containerd[1612]: 2025-11-06 00:24:45.010 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
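The IPAM exchange above is Calico's per-host block allocation: this host holds an affinity for 192.168.88.128/26, a block of 2^(32-26) = 64 addresses (.128 through .191), and hands 192.168.88.132 out of it to goldmane-666569f655-nvmth. A quick check of that containment with the Go standard library:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this host's Calico IPAM affinity covers, per the log.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// A /26 holds 2^(32-26) = 64 addresses: .128 through .191.
	fmt.Println("prefix bits:", block.Bits(), "first address:", block.Addr())

	// The address claimed for goldmane-666569f655-nvmth above.
	pod := netip.MustParseAddr("192.168.88.132")
	fmt.Println("in block:", block.Contains(pod)) // true
}
```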
Nov 6 00:24:45.039962 containerd[1612]: 2025-11-06 00:24:45.010 [INFO][4452] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" HandleID="k8s-pod-network.5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Workload="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.040134 containerd[1612]: 2025-11-06 00:24:45.016 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nvmth-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a4ce142d-4dd8-4bd0-9300-ec9175d131da", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nvmth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9baf71093d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:45.040134 containerd[1612]: 2025-11-06 00:24:45.016 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.040234 containerd[1612]: 2025-11-06 00:24:45.016 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9baf71093d8 ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.040234 containerd[1612]: 2025-11-06 00:24:45.020 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.040292 containerd[1612]: 2025-11-06 00:24:45.020 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nvmth-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a4ce142d-4dd8-4bd0-9300-ec9175d131da", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b", Pod:"goldmane-666569f655-nvmth", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9baf71093d8", MAC:"0a:59:18:75:95:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:45.040349 containerd[1612]: 2025-11-06 00:24:45.034 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" Namespace="calico-system" Pod="goldmane-666569f655-nvmth" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nvmth-eth0" Nov 6 00:24:45.070748 containerd[1612]: time="2025-11-06T00:24:45.070678981Z" level=info msg="connecting to shim 5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b" address="unix:///run/containerd/s/52eb72b48cb5a7433238f66e104193e77e82d0421482fc993a08cd0a5fe35886" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:45.100976 systemd[1]: Started cri-containerd-5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b.scope - libcontainer container 5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b. 
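A few records earlier, systemd-networkd reports "vxlan.calico: Link UP ... Gained carrier": that is the VXLAN device Calico creates to carry pod-to-pod overlay traffic between nodes. A small sketch that inspects the same device, assuming the github.com/vishvananda/netlink package and a node actually running Calico in VXLAN mode:

```go
package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Look up the overlay device systemd-networkd just reported as up.
	link, err := netlink.LinkByName("vxlan.calico")
	if err != nil {
		log.Fatal(err) // not present unless Calico's VXLAN mode is active
	}
	if vx, ok := link.(*netlink.Vxlan); ok {
		fmt.Println("VNI:", vx.VxlanId, "UDP port:", vx.Port, "MTU:", vx.Attrs().MTU)
	}
}
```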
Nov 6 00:24:45.115560 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:45.181203 containerd[1612]: time="2025-11-06T00:24:45.181149313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nvmth,Uid:a4ce142d-4dd8-4bd0-9300-ec9175d131da,Namespace:calico-system,Attempt:0,} returns sandbox id \"5de0f75b2d84205d13908b46e6e11b65f8c7bd00c7d664473aa7d4499fef8f0b\"" Nov 6 00:24:45.194600 containerd[1612]: time="2025-11-06T00:24:45.194513089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:45.208820 containerd[1612]: time="2025-11-06T00:24:45.208711691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:24:45.208894 containerd[1612]: time="2025-11-06T00:24:45.208841709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:24:45.209128 kubelet[2840]: E1106 00:24:45.209067 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:24:45.209211 kubelet[2840]: E1106 00:24:45.209140 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:24:45.209566 kubelet[2840]: E1106 00:24:45.209499 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0e734bd69df1469bb7194c239cb44140,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d79bfcd45-zqqkv_calico-system(d1c893a4-aa9b-4a6a-9aff-057008af6a5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:45.209824 containerd[1612]: time="2025-11-06T00:24:45.209616038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:24:45.225924 kubelet[2840]: E1106 00:24:45.225851 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:45.508017 systemd-networkd[1515]: cali7bfa46d08cb: Gained IPv6LL Nov 6 00:24:45.621321 containerd[1612]: time="2025-11-06T00:24:45.621247904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:45.623227 containerd[1612]: time="2025-11-06T00:24:45.623162602Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:24:45.623227 containerd[1612]: time="2025-11-06T00:24:45.623217247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:45.623556 kubelet[2840]: E1106 00:24:45.623507 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:45.623613 kubelet[2840]: E1106 00:24:45.623569 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:45.624244 kubelet[2840]: E1106 00:24:45.623846 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzr9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-864c69c456-2zzkg_calico-system(e5128615-7aa8-48b2-97ed-a5b035282b5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:45.624496 containerd[1612]: time="2025-11-06T00:24:45.623973681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:24:45.625240 kubelet[2840]: E1106 00:24:45.625187 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:24:45.854607 kubelet[2840]: E1106 00:24:45.854554 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:45.856856 containerd[1612]: time="2025-11-06T00:24:45.856747468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-w2vxq,Uid:00228e39-7e54-4a3f-b428-59bfdf4f00aa,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:24:45.857118 containerd[1612]: time="2025-11-06T00:24:45.856751405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jnrw,Uid:83dbbda3-67a2-4589-b3f0-66ca7b03029b,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:45.857118 containerd[1612]: time="2025-11-06T00:24:45.856773648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-9kz7k,Uid:04a82cf5-90fc-40d4-9038-65add7f7f20f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:24:45.892026 systemd-networkd[1515]: cali51a36269b42: Gained IPv6LL Nov 6 00:24:46.029311 systemd-networkd[1515]: cali3a44a125676: Link UP Nov 6 00:24:46.030323 systemd-networkd[1515]: cali3a44a125676: Gained carrier Nov 6 00:24:46.045153 containerd[1612]: 2025-11-06 00:24:45.929 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0 calico-apiserver-79747456c8- calico-apiserver 00228e39-7e54-4a3f-b428-59bfdf4f00aa 909 0 2025-11-06 00:24:13 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79747456c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79747456c8-w2vxq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a44a125676 [] [] }} ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-" Nov 6 00:24:46.045153 containerd[1612]: 2025-11-06 00:24:45.930 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.045153 containerd[1612]: 2025-11-06 00:24:45.980 [INFO][4604] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" HandleID="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Workload="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:45.980 [INFO][4604] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" HandleID="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Workload="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79747456c8-w2vxq", "timestamp":"2025-11-06 00:24:45.980062545 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:45.980 [INFO][4604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:45.980 [INFO][4604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:45.980 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:45.990 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" host="localhost" Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:45.995 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:46.001 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:46.004 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:46.007 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.045386 containerd[1612]: 2025-11-06 00:24:46.007 [INFO][4604] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" host="localhost" Nov 6 00:24:46.045625 containerd[1612]: 2025-11-06 00:24:46.009 [INFO][4604] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906 Nov 6 00:24:46.045625 containerd[1612]: 2025-11-06 00:24:46.013 [INFO][4604] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" host="localhost" Nov 6 00:24:46.045625 containerd[1612]: 2025-11-06 00:24:46.022 [INFO][4604] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" host="localhost" Nov 6 00:24:46.045625 containerd[1612]: 2025-11-06 00:24:46.022 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" host="localhost" Nov 6 00:24:46.045625 containerd[1612]: 2025-11-06 00:24:46.022 [INFO][4604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
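The kubelet "Nameserver limits exceeded" warning above reflects the long-standing resolv.conf cap: kubelet, like glibc's resolver, honors at most three nameserver entries, so the applied line is trimmed to "1.1.1.1 1.0.0.1 8.8.8.8" and any further entries are dropped. An illustrative sketch of that truncation (the three-server cap is the real limit; the parsing below is not kubelet's code):

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Collect every "nameserver" entry in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Only the first three are honored; the rest trigger the warning.
	if len(servers) > 3 {
		fmt.Println("omitted:", strings.Join(servers[3:], " "))
		servers = servers[:3]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```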
Nov 6 00:24:46.045625 containerd[1612]: 2025-11-06 00:24:46.022 [INFO][4604] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" HandleID="k8s-pod-network.3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Workload="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.045750 containerd[1612]: 2025-11-06 00:24:46.026 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0", GenerateName:"calico-apiserver-79747456c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"00228e39-7e54-4a3f-b428-59bfdf4f00aa", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79747456c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79747456c8-w2vxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a44a125676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.045801 containerd[1612]: 2025-11-06 00:24:46.026 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.045801 containerd[1612]: 2025-11-06 00:24:46.026 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a44a125676 ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.045801 containerd[1612]: 2025-11-06 00:24:46.031 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.045942 containerd[1612]: 2025-11-06 00:24:46.031 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0", GenerateName:"calico-apiserver-79747456c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"00228e39-7e54-4a3f-b428-59bfdf4f00aa", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79747456c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906", Pod:"calico-apiserver-79747456c8-w2vxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a44a125676", MAC:"62:fd:9c:b6:a2:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.045996 containerd[1612]: 2025-11-06 00:24:46.041 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-w2vxq" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--w2vxq-eth0" Nov 6 00:24:46.113053 containerd[1612]: time="2025-11-06T00:24:46.112930469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:46.115697 containerd[1612]: time="2025-11-06T00:24:46.115653188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:24:46.115763 containerd[1612]: time="2025-11-06T00:24:46.115728411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:46.115975 kubelet[2840]: E1106 00:24:46.115934 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:24:46.116449 kubelet[2840]: E1106 00:24:46.115990 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:24:46.116449 kubelet[2840]: E1106 00:24:46.116181 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6cf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nvmth_calico-system(a4ce142d-4dd8-4bd0-9300-ec9175d131da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:46.116624 containerd[1612]: time="2025-11-06T00:24:46.116363644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:24:46.117667 kubelet[2840]: E1106 00:24:46.117641 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:24:46.132922 containerd[1612]: time="2025-11-06T00:24:46.132857203Z" level=info msg="connecting to shim 3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906" address="unix:///run/containerd/s/29821687c24b6b8280b87f2d33a8f7bac7295b115c9a15c39845322273e6481f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:46.137609 systemd-networkd[1515]: cali120d09e05df: Link UP Nov 6 00:24:46.139701 systemd-networkd[1515]: cali120d09e05df: Gained carrier Nov 6 00:24:46.164006 containerd[1612]: 2025-11-06 00:24:45.933 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0 calico-apiserver-79747456c8- calico-apiserver 04a82cf5-90fc-40d4-9038-65add7f7f20f 910 0 2025-11-06 00:24:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79747456c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79747456c8-9kz7k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali120d09e05df [] [] }} ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-" Nov 6 00:24:46.164006 containerd[1612]: 2025-11-06 00:24:45.933 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.164006 containerd[1612]: 2025-11-06 00:24:45.985 [INFO][4598] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" HandleID="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Workload="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:45.985 [INFO][4598] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" HandleID="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Workload="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e60d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79747456c8-9kz7k", "timestamp":"2025-11-06 00:24:45.985741973 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:45.986 [INFO][4598] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.022 [INFO][4598] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.022 [INFO][4598] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.091 [INFO][4598] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" host="localhost" Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.096 [INFO][4598] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.100 [INFO][4598] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.102 [INFO][4598] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.104 [INFO][4598] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.164277 containerd[1612]: 2025-11-06 00:24:46.104 [INFO][4598] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" host="localhost" Nov 6 00:24:46.164624 containerd[1612]: 2025-11-06 00:24:46.106 [INFO][4598] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7 Nov 6 00:24:46.164624 containerd[1612]: 2025-11-06 00:24:46.110 [INFO][4598] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" host="localhost" Nov 6 00:24:46.164624 containerd[1612]: 2025-11-06 00:24:46.123 [INFO][4598] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" host="localhost" Nov 6 00:24:46.164624 containerd[1612]: 2025-11-06 00:24:46.123 [INFO][4598] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" host="localhost" Nov 6 00:24:46.164624 containerd[1612]: 2025-11-06 00:24:46.123 [INFO][4598] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
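Note the gap in the request for calico-apiserver-79747456c8-9kz7k: "About to acquire host-wide IPAM lock" at 00:24:45.986, but "Acquired" only at 00:24:46.022, while the w2vxq request (handle [4604]) held the lock. Calico serializes all IPAM writes on a node through this single lock, so concurrent CNI ADDs queue behind one another. A toy model of that serialization (illustrative only, not Calico's code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// hostIPAMLock stands in for Calico's host-wide IPAM lock: every address
// assignment on the node takes it, so concurrent CNI ADDs run one at a time.
var hostIPAMLock sync.Mutex

func assign(pod string, next *int, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(pod, "about to acquire host-wide IPAM lock")
	hostIPAMLock.Lock()
	fmt.Println(pod, "acquired host-wide IPAM lock")
	addr := *next
	*next = addr + 1
	time.Sleep(10 * time.Millisecond) // stands in for the datastore write
	fmt.Printf("%s claimed 192.168.88.%d/26\n", pod, addr)
	hostIPAMLock.Unlock()
}

func main() {
	next := 133 // the block's next free address at this point in the log
	var wg sync.WaitGroup
	for _, pod := range []string{"w2vxq", "9kz7k", "6jnrw"} {
		wg.Add(1)
		go assign(pod, &next, &wg)
	}
	wg.Wait()
}
```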
Nov 6 00:24:46.164624 containerd[1612]: 2025-11-06 00:24:46.123 [INFO][4598] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" HandleID="k8s-pod-network.0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Workload="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.164794 containerd[1612]: 2025-11-06 00:24:46.128 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0", GenerateName:"calico-apiserver-79747456c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"04a82cf5-90fc-40d4-9038-65add7f7f20f", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79747456c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79747456c8-9kz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali120d09e05df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.164902 containerd[1612]: 2025-11-06 00:24:46.129 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.164902 containerd[1612]: 2025-11-06 00:24:46.129 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali120d09e05df ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.164902 containerd[1612]: 2025-11-06 00:24:46.139 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.164997 containerd[1612]: 2025-11-06 00:24:46.140 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0", GenerateName:"calico-apiserver-79747456c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"04a82cf5-90fc-40d4-9038-65add7f7f20f", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79747456c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7", Pod:"calico-apiserver-79747456c8-9kz7k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali120d09e05df", MAC:"96:a6:b7:99:6d:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.165065 containerd[1612]: 2025-11-06 00:24:46.158 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" Namespace="calico-apiserver" Pod="calico-apiserver-79747456c8-9kz7k" WorkloadEndpoint="localhost-k8s-calico--apiserver--79747456c8--9kz7k-eth0" Nov 6 00:24:46.172036 systemd[1]: Started cri-containerd-3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906.scope - libcontainer container 3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906. 
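Each endpoint gets a deterministic host-side veth name like cali3a44a125676 or cali120d09e05df: "cali" plus 11 hex characters, which together hit the 15-byte Linux limit on interface names (IFNAMSIZ minus the terminating NUL). The suffix is derived by hashing an endpoint identifier; the exact hash input below is an assumption for illustration, not Calico's verified scheme:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches the naming pattern seen in the log: hash some stable
// workload-endpoint identifier (hypothetical input) and keep 11 hex chars
// so "cali"+suffix fits the 15 usable bytes of IFNAMSIZ.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver/calico-apiserver-79747456c8-w2vxq"))
}
```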
Nov 6 00:24:46.188625 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:46.200717 containerd[1612]: time="2025-11-06T00:24:46.200649705Z" level=info msg="connecting to shim 0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7" address="unix:///run/containerd/s/43540dec2cb3229124b0f95db2bc70bc0b91a0e681ac8b8d09e97de7f1f4413c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:46.230998 kubelet[2840]: E1106 00:24:46.230952 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:24:46.231406 kubelet[2840]: E1106 00:24:46.231371 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:24:46.240148 systemd[1]: Started cri-containerd-0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7.scope - libcontainer container 0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7. 
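The "connecting to shim ... protocol=ttrpc version=3" records mean containerd dials the shim's unix socket and speaks ttrpc, a trimmed-down gRPC-like RPC protocol, over it; the systemd scope started right after is the transient unit that contains the shim and its container. A sketch of the same dial, assuming the github.com/containerd/ttrpc package and that socket path (which exists only on this node while the shim runs):

```go
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Socket path copied from the "connecting to shim" record above
	// (the "unix://" scheme prefix is stripped for net.Dial).
	conn, err := net.Dial("unix", "/run/containerd/s/43540dec2cb3229124b0f95db2bc70bc0b91a0e681ac8b8d09e97de7f1f4413c")
	if err != nil {
		log.Fatal(err)
	}

	// Layer a ttrpc client over the raw connection, as containerd does.
	client := ttrpc.NewClient(conn)
	defer client.Close()
	log.Println("connected to shim over ttrpc")
}
```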
Nov 6 00:24:46.265818 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:46.284824 containerd[1612]: time="2025-11-06T00:24:46.284704464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-w2vxq,Uid:00228e39-7e54-4a3f-b428-59bfdf4f00aa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3f6b7bd66af9b9f102f204576978871f340e010bdd48308738839306aa3c3906\"" Nov 6 00:24:46.306491 systemd-networkd[1515]: cali5a59916d6b7: Link UP Nov 6 00:24:46.308166 systemd-networkd[1515]: cali5a59916d6b7: Gained carrier Nov 6 00:24:46.316393 containerd[1612]: time="2025-11-06T00:24:46.316269469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79747456c8-9kz7k,Uid:04a82cf5-90fc-40d4-9038-65add7f7f20f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0a95a5510e41366508c00f4a48e9f888448a8da44281804e471134777f15e0e7\"" Nov 6 00:24:46.330895 containerd[1612]: 2025-11-06 00:24:45.932 [INFO][4561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0 coredns-674b8bbfcf- kube-system 83dbbda3-67a2-4589-b3f0-66ca7b03029b 901 0 2025-11-06 00:23:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6jnrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a59916d6b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-" Nov 6 00:24:46.330895 containerd[1612]: 2025-11-06 00:24:45.932 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.330895 containerd[1612]: 2025-11-06 00:24:45.987 [INFO][4595] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" HandleID="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Workload="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:45.987 [INFO][4595] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" HandleID="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Workload="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b41b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6jnrw", "timestamp":"2025-11-06 00:24:45.987623456 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:45.987 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.123 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.123 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.193 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" host="localhost" Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.207 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.218 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.221 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.224 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.331143 containerd[1612]: 2025-11-06 00:24:46.225 [INFO][4595] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" host="localhost" Nov 6 00:24:46.331363 containerd[1612]: 2025-11-06 00:24:46.228 [INFO][4595] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d Nov 6 00:24:46.331363 containerd[1612]: 2025-11-06 00:24:46.246 [INFO][4595] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" host="localhost" Nov 6 00:24:46.331363 containerd[1612]: 2025-11-06 00:24:46.287 [INFO][4595] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" host="localhost" Nov 6 00:24:46.331363 containerd[1612]: 2025-11-06 00:24:46.287 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" host="localhost" Nov 6 00:24:46.331363 containerd[1612]: 2025-11-06 00:24:46.287 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
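The IPAM trace above shows the whole allocation path: take the host-wide lock, load the block 192.168.88.128/26 that is affine to this host, and claim the next free ordinal, here 192.168.88.135 (offset 7 in the block). A minimal illustration of that arithmetic, not Calico's actual datastore code:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu    sync.Mutex   // stands in for the "host-wide IPAM lock"
	cidr  netip.Prefix // e.g. 192.168.88.128/26
	inUse map[int]bool // offsets already assigned in this block
}

// assign returns the lowest free address in the block, or false if the
// block (64 addresses for a /26) is exhausted.
func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	size := 1 << (32 - b.cidr.Bits())
	addr := b.cidr.Addr()
	for off := 0; off < size; off++ {
		if !b.inUse[off] {
			b.inUse[off] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:  netip.MustParsePrefix("192.168.88.128/26"),
		inUse: map[int]bool{0: true, 1: true, 2: true, 3: true, 4: true, 5: true, 6: true},
	}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.88.135, matching the IP claimed in the log
}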
Nov 6 00:24:46.331363 containerd[1612]: 2025-11-06 00:24:46.287 [INFO][4595] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" HandleID="k8s-pod-network.e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Workload="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.331502 containerd[1612]: 2025-11-06 00:24:46.297 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83dbbda3-67a2-4589-b3f0-66ca7b03029b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6jnrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a59916d6b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.331560 containerd[1612]: 2025-11-06 00:24:46.300 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.331560 containerd[1612]: 2025-11-06 00:24:46.300 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a59916d6b7 ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.331560 containerd[1612]: 2025-11-06 00:24:46.309 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.331627 
containerd[1612]: 2025-11-06 00:24:46.310 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83dbbda3-67a2-4589-b3f0-66ca7b03029b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d", Pod:"coredns-674b8bbfcf-6jnrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a59916d6b7", MAC:"be:9a:bc:99:8d:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.331627 containerd[1612]: 2025-11-06 00:24:46.325 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" Namespace="kube-system" Pod="coredns-674b8bbfcf-6jnrw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6jnrw-eth0" Nov 6 00:24:46.340273 systemd-networkd[1515]: vxlan.calico: Gained IPv6LL Nov 6 00:24:46.361958 containerd[1612]: time="2025-11-06T00:24:46.361855964Z" level=info msg="connecting to shim e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d" address="unix:///run/containerd/s/d6b0e735954a38aa321997ccea35c05c85dccc0e50b2ddc825a1da595864119a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:46.397197 systemd[1]: Started cri-containerd-e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d.scope - libcontainer container e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d. 
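The WorkloadEndpoint dumps above print port numbers in Go hex notation (Port:0x35, Port:0x23c1); they are the same dns and metrics ports listed in decimal earlier in the trace. A one-liner decoding them:

package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns":     0x35,   // 53, served over both UDP and TCP
		"metrics": 0x23c1, // 9153, CoreDNS's Prometheus metrics port
	}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p)
	}
}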
Nov 6 00:24:46.415832 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:46.450083 containerd[1612]: time="2025-11-06T00:24:46.450015902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6jnrw,Uid:83dbbda3-67a2-4589-b3f0-66ca7b03029b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d\"" Nov 6 00:24:46.450983 kubelet[2840]: E1106 00:24:46.450945 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:46.455686 containerd[1612]: time="2025-11-06T00:24:46.455643376Z" level=info msg="CreateContainer within sandbox \"e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:24:46.467988 systemd-networkd[1515]: cali9baf71093d8: Gained IPv6LL Nov 6 00:24:46.470713 containerd[1612]: time="2025-11-06T00:24:46.470657982Z" level=info msg="Container 5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:46.478208 containerd[1612]: time="2025-11-06T00:24:46.478160506Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:46.479042 containerd[1612]: time="2025-11-06T00:24:46.478998025Z" level=info msg="CreateContainer within sandbox \"e70bef78f7f716942c3ef11eedc9f4443af2204b4de9b840216a6cd066ae508d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f\"" Nov 6 00:24:46.479648 containerd[1612]: time="2025-11-06T00:24:46.479611867Z" level=info msg="StartContainer for \"5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f\"" Nov 6 00:24:46.480469 containerd[1612]: time="2025-11-06T00:24:46.480405512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:24:46.480542 containerd[1612]: time="2025-11-06T00:24:46.480510513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:46.480732 containerd[1612]: time="2025-11-06T00:24:46.480657955Z" level=info msg="connecting to shim 5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f" address="unix:///run/containerd/s/d6b0e735954a38aa321997ccea35c05c85dccc0e50b2ddc825a1da595864119a" protocol=ttrpc version=3 Nov 6 00:24:46.480779 kubelet[2840]: E1106 00:24:46.480678 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:24:46.480779 kubelet[2840]: E1106 00:24:46.480724 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:24:46.480977 kubelet[2840]: E1106 00:24:46.480922 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d79bfcd45-zqqkv_calico-system(d1c893a4-aa9b-4a6a-9aff-057008af6a5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:46.481125 containerd[1612]: time="2025-11-06T00:24:46.481017020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:24:46.482414 kubelet[2840]: E1106 00:24:46.482356 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d79bfcd45-zqqkv" podUID="d1c893a4-aa9b-4a6a-9aff-057008af6a5e" Nov 6 00:24:46.518049 systemd[1]: Started 
cri-containerd-5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f.scope - libcontainer container 5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f. Nov 6 00:24:46.564896 containerd[1612]: time="2025-11-06T00:24:46.564848814Z" level=info msg="StartContainer for \"5bf4283ad60d0abcacace58050fbca151c02da8a5f4060ab8ad8bfbfcb868c1f\" returns successfully" Nov 6 00:24:46.838971 containerd[1612]: time="2025-11-06T00:24:46.838909408Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:46.845002 containerd[1612]: time="2025-11-06T00:24:46.844958146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:24:46.845002 containerd[1612]: time="2025-11-06T00:24:46.844991871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:46.845324 kubelet[2840]: E1106 00:24:46.845257 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:46.845565 kubelet[2840]: E1106 00:24:46.845331 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:46.845788 kubelet[2840]: E1106 00:24:46.845722 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdg5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79747456c8-w2vxq_calico-apiserver(00228e39-7e54-4a3f-b428-59bfdf4f00aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:46.846059 containerd[1612]: time="2025-11-06T00:24:46.846031525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:24:46.848512 kubelet[2840]: E1106 00:24:46.847874 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:24:46.854079 kubelet[2840]: E1106 00:24:46.853974 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:46.854639 containerd[1612]: time="2025-11-06T00:24:46.854602469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96c9f,Uid:5aa1b76a-bf42-444c-9590-6b40021950af,Namespace:kube-system,Attempt:0,}" Nov 6 00:24:46.963267 systemd-networkd[1515]: cali5ce95222dfc: Link UP Nov 6 00:24:46.963977 systemd-networkd[1515]: cali5ce95222dfc: Gained carrier Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.896 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--96c9f-eth0 coredns-674b8bbfcf- kube-system 5aa1b76a-bf42-444c-9590-6b40021950af 906 0 2025-11-06 00:23:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-96c9f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ce95222dfc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.897 [INFO][4822] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.924 [INFO][4836] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" HandleID="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Workload="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.924 [INFO][4836] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" HandleID="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Workload="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7f30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-96c9f", "timestamp":"2025-11-06 00:24:46.924401532 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.924 [INFO][4836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.924 [INFO][4836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.924 [INFO][4836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.931 [INFO][4836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.937 [INFO][4836] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.941 [INFO][4836] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.942 [INFO][4836] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.944 [INFO][4836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.944 [INFO][4836] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.946 [INFO][4836] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6 Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.950 [INFO][4836] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.956 [INFO][4836] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.956 [INFO][4836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" host="localhost" Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.956 [INFO][4836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:24:46.978993 containerd[1612]: 2025-11-06 00:24:46.956 [INFO][4836] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" HandleID="k8s-pod-network.ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Workload="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:46.979658 containerd[1612]: 2025-11-06 00:24:46.960 [INFO][4822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--96c9f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5aa1b76a-bf42-444c-9590-6b40021950af", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-96c9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ce95222dfc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.979658 containerd[1612]: 2025-11-06 00:24:46.960 [INFO][4822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:46.979658 containerd[1612]: 2025-11-06 00:24:46.960 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ce95222dfc 
ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:46.979658 containerd[1612]: 2025-11-06 00:24:46.964 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:46.979658 containerd[1612]: 2025-11-06 00:24:46.965 [INFO][4822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--96c9f-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5aa1b76a-bf42-444c-9590-6b40021950af", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6", Pod:"coredns-674b8bbfcf-96c9f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ce95222dfc", MAC:"b2:fa:2e:83:ab:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:24:46.979658 containerd[1612]: 2025-11-06 00:24:46.974 [INFO][4822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" Namespace="kube-system" Pod="coredns-674b8bbfcf-96c9f" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--96c9f-eth0" Nov 6 00:24:47.002265 containerd[1612]: time="2025-11-06T00:24:47.002213963Z" level=info msg="connecting to shim ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6" address="unix:///run/containerd/s/89122147156cb4957e1bfbc3f4362fc76d1d448a0e27bf81e429b9a50f3dcb8a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:24:47.032993 systemd[1]: Started cri-containerd-ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6.scope - libcontainer container 
ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6. Nov 6 00:24:47.052139 systemd-resolved[1380]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 00:24:47.091311 containerd[1612]: time="2025-11-06T00:24:47.091155877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96c9f,Uid:5aa1b76a-bf42-444c-9590-6b40021950af,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6\"" Nov 6 00:24:47.092340 kubelet[2840]: E1106 00:24:47.092316 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:47.099005 containerd[1612]: time="2025-11-06T00:24:47.098956162Z" level=info msg="CreateContainer within sandbox \"ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:24:47.118241 containerd[1612]: time="2025-11-06T00:24:47.118186892Z" level=info msg="Container 66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:24:47.124179 containerd[1612]: time="2025-11-06T00:24:47.124113290Z" level=info msg="CreateContainer within sandbox \"ab7fcd9f78a5e48293f2108e21299993ed4afd021f9777217d573cf016df3fd6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b\"" Nov 6 00:24:47.124831 containerd[1612]: time="2025-11-06T00:24:47.124758853Z" level=info msg="StartContainer for \"66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b\"" Nov 6 00:24:47.125798 containerd[1612]: time="2025-11-06T00:24:47.125752389Z" level=info msg="connecting to shim 66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b" address="unix:///run/containerd/s/89122147156cb4957e1bfbc3f4362fc76d1d448a0e27bf81e429b9a50f3dcb8a" protocol=ttrpc version=3 Nov 6 00:24:47.154987 systemd[1]: Started cri-containerd-66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b.scope - libcontainer container 66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b. 
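The recurring kubelet "Nameserver limits exceeded" warning fires when the node's resolv.conf lists more nameservers than the resolver limit of three (glibc's MAXNS); the applied line keeps only the first three, here 1.1.1.1, 1.0.0.1, 8.8.8.8. A sketch of that truncation, with a hypothetical fourth entry standing in for whatever extra server the node had configured:

package main

import "fmt"

const maxNameservers = 3 // glibc's MAXNS; kubelet applies the same cap

// capNameservers keeps the first three servers and reports whether any
// were dropped, which is what triggers the warning in the log.
func capNameservers(ns []string) ([]string, bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	applied, truncated := capNameservers(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}) // 9.9.9.9 is hypothetical
	fmt.Println(applied, "truncated:", truncated)
}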
Nov 6 00:24:47.184657 containerd[1612]: time="2025-11-06T00:24:47.184611964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:47.185938 containerd[1612]: time="2025-11-06T00:24:47.185888069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:24:47.185938 containerd[1612]: time="2025-11-06T00:24:47.185916814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:47.186302 kubelet[2840]: E1106 00:24:47.186228 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:47.186698 kubelet[2840]: E1106 00:24:47.186385 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:47.186698 kubelet[2840]: E1106 00:24:47.186625 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvdc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79747456c8-9kz7k_calico-apiserver(04a82cf5-90fc-40d4-9038-65add7f7f20f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:47.187964 kubelet[2840]: E1106 00:24:47.187909 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:24:47.196160 containerd[1612]: time="2025-11-06T00:24:47.195582530Z" level=info msg="StartContainer for \"66a6c8b922691a4419e5906cf3a9de79a62921d5fabe545ff0bedf7427d5951b\" returns successfully" Nov 6 00:24:47.232384 kubelet[2840]: E1106 00:24:47.232319 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:24:47.238838 kubelet[2840]: E1106 00:24:47.238324 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:24:47.242472 kubelet[2840]: E1106 00:24:47.242435 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:47.247329 kubelet[2840]: E1106 
00:24:47.247277 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:47.248718 kubelet[2840]: E1106 00:24:47.248668 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d79bfcd45-zqqkv" podUID="d1c893a4-aa9b-4a6a-9aff-057008af6a5e" Nov 6 00:24:47.276089 kubelet[2840]: I1106 00:24:47.276016 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-96c9f" podStartSLOduration=48.275997992 podStartE2EDuration="48.275997992s" podCreationTimestamp="2025-11-06 00:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:47.274261047 +0000 UTC m=+52.938095322" watchObservedRunningTime="2025-11-06 00:24:47.275997992 +0000 UTC m=+52.939832277" Nov 6 00:24:47.300422 kubelet[2840]: I1106 00:24:47.299830 2840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6jnrw" podStartSLOduration=48.299814452 podStartE2EDuration="48.299814452s" podCreationTimestamp="2025-11-06 00:23:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:24:47.299063509 +0000 UTC m=+52.962897784" watchObservedRunningTime="2025-11-06 00:24:47.299814452 +0000 UTC m=+52.963648737" Nov 6 00:24:47.300941 systemd-networkd[1515]: cali120d09e05df: Gained IPv6LL Nov 6 00:24:47.301373 systemd-networkd[1515]: cali3a44a125676: Gained IPv6LL Nov 6 00:24:47.413590 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:47272.service - OpenSSH per-connection server daemon (10.0.0.1:47272). Nov 6 00:24:47.492097 systemd-networkd[1515]: cali5a59916d6b7: Gained IPv6LL Nov 6 00:24:47.493997 sshd[4943]: Accepted publickey for core from 10.0.0.1 port 47272 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:24:47.496099 sshd-session[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:47.501390 systemd-logind[1599]: New session 11 of user core. Nov 6 00:24:47.512054 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:24:47.658474 sshd[4946]: Connection closed by 10.0.0.1 port 47272 Nov 6 00:24:47.658899 sshd-session[4943]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:47.663346 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:47272.service: Deactivated successfully. Nov 6 00:24:47.665490 systemd[1]: session-11.scope: Deactivated successfully. 
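The pod_startup_latency_tracker entries above report podStartSLOduration=48.275997992s for a pod created at 00:23:59 and observed running at 00:24:47.275997992; the SLO duration is simply the difference between those two timestamps. A sketch reproducing the arithmetic with the values taken from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the "2025-11-06 00:23:59 +0000 UTC" form in the log;
	// parse errors are ignored because both inputs are log constants.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-11-06 00:23:59 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-06 00:24:47.275997992 +0000 UTC")
	fmt.Println(running.Sub(created)) // 48.275997992s
}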
Nov 6 00:24:47.666693 systemd-logind[1599]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:24:47.667968 systemd-logind[1599]: Removed session 11. Nov 6 00:24:48.004059 systemd-networkd[1515]: cali5ce95222dfc: Gained IPv6LL Nov 6 00:24:48.079333 kubelet[2840]: I1106 00:24:48.079290 2840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:24:48.080023 kubelet[2840]: E1106 00:24:48.079967 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:48.228177 containerd[1612]: time="2025-11-06T00:24:48.228133789Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\" id:\"7083d4448a1964877e67706361bcf922b3b99050a3b278342a30ac185aa8921a\" pid:4971 exited_at:{seconds:1762388688 nanos:227841662}" Nov 6 00:24:48.248277 kubelet[2840]: E1106 00:24:48.248178 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:48.248277 kubelet[2840]: E1106 00:24:48.248234 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:48.250625 kubelet[2840]: E1106 00:24:48.248726 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:24:48.250625 kubelet[2840]: E1106 00:24:48.248734 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:24:48.250625 kubelet[2840]: E1106 00:24:48.249921 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:48.339401 containerd[1612]: time="2025-11-06T00:24:48.339330520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\" id:\"9626ee7a0ad7daac27419f53c94d465028629565ea280e9a4839c0aa4e62dbc7\" pid:4995 exited_at:{seconds:1762388688 nanos:338990782}" Nov 6 00:24:49.250264 kubelet[2840]: E1106 00:24:49.250220 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:24:52.676685 systemd[1]: 
Started sshd@11-10.0.0.58:22-10.0.0.1:47276.service - OpenSSH per-connection server daemon (10.0.0.1:47276). Nov 6 00:24:52.733636 sshd[5017]: Accepted publickey for core from 10.0.0.1 port 47276 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:24:52.735565 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:52.740536 systemd-logind[1599]: New session 12 of user core. Nov 6 00:24:52.750979 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 00:24:53.253506 sshd[5020]: Connection closed by 10.0.0.1 port 47276 Nov 6 00:24:53.254608 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:53.261201 systemd-logind[1599]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:24:53.261788 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:47276.service: Deactivated successfully. Nov 6 00:24:53.265772 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:24:53.270761 systemd-logind[1599]: Removed session 12. Nov 6 00:24:56.876071 containerd[1612]: time="2025-11-06T00:24:56.875998187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:24:57.260933 containerd[1612]: time="2025-11-06T00:24:57.260763175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:57.265254 containerd[1612]: time="2025-11-06T00:24:57.264089447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:24:57.265254 containerd[1612]: time="2025-11-06T00:24:57.264210608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:57.265448 kubelet[2840]: E1106 00:24:57.264407 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:57.265448 kubelet[2840]: E1106 00:24:57.264472 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:57.265448 kubelet[2840]: E1106 00:24:57.264686 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzr9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-864c69c456-2zzkg_calico-system(e5128615-7aa8-48b2-97ed-a5b035282b5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:57.269360 kubelet[2840]: E1106 00:24:57.267951 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:24:57.859418 containerd[1612]: time="2025-11-06T00:24:57.859103941Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:24:58.259025 containerd[1612]: time="2025-11-06T00:24:58.257386892Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:58.295085 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:43442.service - OpenSSH per-connection server daemon (10.0.0.1:43442). Nov 6 00:24:58.340023 containerd[1612]: time="2025-11-06T00:24:58.339945260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:24:58.340319 containerd[1612]: time="2025-11-06T00:24:58.340236342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:24:58.340766 kubelet[2840]: E1106 00:24:58.340543 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:58.340766 kubelet[2840]: E1106 00:24:58.340635 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:58.342215 kubelet[2840]: E1106 00:24:58.341416 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:58.346241 containerd[1612]: time="2025-11-06T00:24:58.346193734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:24:58.447449 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 43442 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:24:58.448695 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:58.466400 systemd-logind[1599]: New session 13 of user core. Nov 6 00:24:58.529950 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 6 00:24:58.748327 containerd[1612]: time="2025-11-06T00:24:58.746777169Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:58.751406 containerd[1612]: time="2025-11-06T00:24:58.751278133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:24:58.751697 containerd[1612]: time="2025-11-06T00:24:58.751668706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:24:58.755393 kubelet[2840]: E1106 00:24:58.753951 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:58.755393 kubelet[2840]: E1106 00:24:58.754017 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:58.755393 kubelet[2840]: E1106 00:24:58.754159 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:58.757594 kubelet[2840]: E1106 00:24:58.757284 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:24:58.961064 sshd[5047]: Connection closed by 10.0.0.1 port 43442 Nov 6 00:24:58.959553 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:59.000470 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:43442.service: Deactivated successfully. Nov 6 00:24:59.005687 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:24:59.007337 systemd-logind[1599]: Session 13 logged out. Waiting for processes to exit. 
Nov 6 00:24:59.028598 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:43452.service - OpenSSH per-connection server daemon (10.0.0.1:43452). Nov 6 00:24:59.032780 systemd-logind[1599]: Removed session 13. Nov 6 00:24:59.141414 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 43452 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:24:59.143654 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:59.150190 systemd-logind[1599]: New session 14 of user core. Nov 6 00:24:59.160279 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:24:59.383931 sshd[5065]: Connection closed by 10.0.0.1 port 43452 Nov 6 00:24:59.385779 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:59.403165 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:43452.service: Deactivated successfully. Nov 6 00:24:59.406766 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:24:59.407681 systemd-logind[1599]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:24:59.412332 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:43464.service - OpenSSH per-connection server daemon (10.0.0.1:43464). Nov 6 00:24:59.414427 systemd-logind[1599]: Removed session 14. Nov 6 00:24:59.500079 sshd[5076]: Accepted publickey for core from 10.0.0.1 port 43464 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:24:59.501892 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:24:59.507477 systemd-logind[1599]: New session 15 of user core. Nov 6 00:24:59.521245 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:24:59.645570 sshd[5079]: Connection closed by 10.0.0.1 port 43464 Nov 6 00:24:59.645784 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Nov 6 00:24:59.651092 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:43464.service: Deactivated successfully. Nov 6 00:24:59.653334 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:24:59.654283 systemd-logind[1599]: Session 15 logged out. Waiting for processes to exit. Nov 6 00:24:59.655470 systemd-logind[1599]: Removed session 15. 
Nov 6 00:24:59.855502 containerd[1612]: time="2025-11-06T00:24:59.855450793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:25:00.197791 containerd[1612]: time="2025-11-06T00:25:00.197717105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:00.199279 containerd[1612]: time="2025-11-06T00:25:00.199216813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:25:00.199335 containerd[1612]: time="2025-11-06T00:25:00.199280854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:25:00.199501 kubelet[2840]: E1106 00:25:00.199445 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:00.199904 kubelet[2840]: E1106 00:25:00.199521 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:00.199904 kubelet[2840]: E1106 00:25:00.199677 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0e734bd69df1469bb7194c239cb44140,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d79bfcd45-zqqkv_calico-system(d1c893a4-aa9b-4a6a-9aff-057008af6a5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:00.201720 containerd[1612]: time="2025-11-06T00:25:00.201689349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:25:00.550222 containerd[1612]: time="2025-11-06T00:25:00.548258559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:00.550222 containerd[1612]: time="2025-11-06T00:25:00.549900459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:25:00.550222 containerd[1612]: time="2025-11-06T00:25:00.549934203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:00.550442 kubelet[2840]: E1106 00:25:00.550117 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:00.550442 kubelet[2840]: E1106 00:25:00.550167 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:00.550442 kubelet[2840]: E1106 00:25:00.550286 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d79bfcd45-zqqkv_calico-system(d1c893a4-aa9b-4a6a-9aff-057008af6a5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:00.552019 kubelet[2840]: E1106 00:25:00.551941 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d79bfcd45-zqqkv" podUID="d1c893a4-aa9b-4a6a-9aff-057008af6a5e" Nov 6 00:25:00.854051 kubelet[2840]: E1106 00:25:00.853976 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:00.855269 containerd[1612]: time="2025-11-06T00:25:00.854714275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:25:01.308537 containerd[1612]: time="2025-11-06T00:25:01.308381242Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Nov 6 00:25:01.309740 containerd[1612]: time="2025-11-06T00:25:01.309702942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:25:01.309893 containerd[1612]: time="2025-11-06T00:25:01.309756383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:01.310000 kubelet[2840]: E1106 00:25:01.309945 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:01.310000 kubelet[2840]: E1106 00:25:01.309995 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:01.310256 kubelet[2840]: E1106 00:25:01.310163 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6cf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nvmth_calico-system(a4ce142d-4dd8-4bd0-9300-ec9175d131da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:01.311407 kubelet[2840]: E1106 00:25:01.311363 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:25:01.856462 kubelet[2840]: E1106 00:25:01.856402 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:01.857310 containerd[1612]: time="2025-11-06T00:25:01.857276340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:02.241630 containerd[1612]: time="2025-11-06T00:25:02.241452354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:02.267035 containerd[1612]: time="2025-11-06T00:25:02.266947187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:02.267035 containerd[1612]: time="2025-11-06T00:25:02.266996461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:02.267371 kubelet[2840]: E1106 00:25:02.267310 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:02.267437 kubelet[2840]: E1106 00:25:02.267375 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:02.268135 kubelet[2840]: E1106 00:25:02.267541 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvdc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79747456c8-9kz7k_calico-apiserver(04a82cf5-90fc-40d4-9038-65add7f7f20f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:02.268924 kubelet[2840]: E1106 00:25:02.268869 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:25:02.855533 containerd[1612]: time="2025-11-06T00:25:02.855468337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:03.218277 containerd[1612]: time="2025-11-06T00:25:03.218112970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:03.220549 
containerd[1612]: time="2025-11-06T00:25:03.220462849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:03.220715 containerd[1612]: time="2025-11-06T00:25:03.220598597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:03.220963 kubelet[2840]: E1106 00:25:03.220905 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:03.221353 kubelet[2840]: E1106 00:25:03.220982 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:03.221353 kubelet[2840]: E1106 00:25:03.221187 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdg5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-79747456c8-w2vxq_calico-apiserver(00228e39-7e54-4a3f-b428-59bfdf4f00aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:03.222437 kubelet[2840]: E1106 00:25:03.222392 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:25:04.666641 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:43472.service - OpenSSH per-connection server daemon (10.0.0.1:43472). Nov 6 00:25:04.737015 sshd[5096]: Accepted publickey for core from 10.0.0.1 port 43472 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:04.738942 sshd-session[5096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:04.745844 systemd-logind[1599]: New session 16 of user core. Nov 6 00:25:04.752096 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 00:25:04.874939 sshd[5099]: Connection closed by 10.0.0.1 port 43472 Nov 6 00:25:04.875248 sshd-session[5096]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:04.879715 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:43472.service: Deactivated successfully. Nov 6 00:25:04.881872 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:25:04.882758 systemd-logind[1599]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:25:04.884099 systemd-logind[1599]: Removed session 16. Nov 6 00:25:09.855309 kubelet[2840]: E1106 00:25:09.855231 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:25:09.894191 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:53628.service - OpenSSH per-connection server daemon (10.0.0.1:53628). Nov 6 00:25:09.951265 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 53628 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:09.953870 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:09.959724 systemd-logind[1599]: New session 17 of user core. Nov 6 00:25:09.973977 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 00:25:10.111140 sshd[5129]: Connection closed by 10.0.0.1 port 53628 Nov 6 00:25:10.113054 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:10.118319 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:53628.service: Deactivated successfully. 
Nov 6 00:25:10.120662 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:25:10.121702 systemd-logind[1599]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:25:10.123497 systemd-logind[1599]: Removed session 17. Nov 6 00:25:12.854745 kubelet[2840]: E1106 00:25:12.854686 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:25:12.855587 kubelet[2840]: E1106 00:25:12.855190 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:25:13.855545 kubelet[2840]: E1106 00:25:13.855484 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d79bfcd45-zqqkv" podUID="d1c893a4-aa9b-4a6a-9aff-057008af6a5e" Nov 6 00:25:15.132358 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:53630.service - OpenSSH per-connection server daemon (10.0.0.1:53630). Nov 6 00:25:15.227109 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 53630 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:15.230314 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:15.239793 systemd-logind[1599]: New session 18 of user core. Nov 6 00:25:15.244017 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 6 00:25:15.453945 sshd[5145]: Connection closed by 10.0.0.1 port 53630 Nov 6 00:25:15.455249 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:15.465672 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:53630.service: Deactivated successfully. Nov 6 00:25:15.466215 systemd-logind[1599]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:25:15.470425 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:25:15.474711 systemd-logind[1599]: Removed session 18. Nov 6 00:25:16.858151 kubelet[2840]: E1106 00:25:16.858091 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:25:17.855128 kubelet[2840]: E1106 00:25:17.855074 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:25:18.330823 containerd[1612]: time="2025-11-06T00:25:18.330749886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\" id:\"211062df39cc63634d3e535391e01218d85b377a1e6f70a15d783eb6eeb0d7b9\" pid:5171 exited_at:{seconds:1762388718 nanos:330331775}" Nov 6 00:25:18.854357 kubelet[2840]: E1106 00:25:18.853955 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:18.854877 kubelet[2840]: E1106 00:25:18.854118 2840 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 00:25:20.469378 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:57472.service - OpenSSH per-connection server daemon (10.0.0.1:57472). Nov 6 00:25:20.539140 sshd[5184]: Accepted publickey for core from 10.0.0.1 port 57472 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:20.541237 sshd-session[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:20.547130 systemd-logind[1599]: New session 19 of user core. Nov 6 00:25:20.554067 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:25:20.693976 sshd[5187]: Connection closed by 10.0.0.1 port 57472 Nov 6 00:25:20.694421 sshd-session[5184]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:20.699620 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:57472.service: Deactivated successfully. Nov 6 00:25:20.701647 systemd[1]: session-19.scope: Deactivated successfully. 
Nov 6 00:25:20.702583 systemd-logind[1599]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:25:20.703719 systemd-logind[1599]: Removed session 19. Nov 6 00:25:21.855435 containerd[1612]: time="2025-11-06T00:25:21.855369259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:25:22.324116 containerd[1612]: time="2025-11-06T00:25:22.324050558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:22.353274 containerd[1612]: time="2025-11-06T00:25:22.353201723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:22.353274 containerd[1612]: time="2025-11-06T00:25:22.353242711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:25:22.353619 kubelet[2840]: E1106 00:25:22.353565 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:22.354039 kubelet[2840]: E1106 00:25:22.353634 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:25:22.354039 kubelet[2840]: E1106 00:25:22.353854 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzr9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-864c69c456-2zzkg_calico-system(e5128615-7aa8-48b2-97ed-a5b035282b5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:22.355085 kubelet[2840]: E1106 00:25:22.355047 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:25:25.716329 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:57474.service - OpenSSH per-connection server daemon (10.0.0.1:57474). Nov 6 00:25:25.790157 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 57474 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:25.796053 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:25.818204 systemd-logind[1599]: New session 20 of user core. Nov 6 00:25:25.820756 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 00:25:25.858269 containerd[1612]: time="2025-11-06T00:25:25.858219698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:25:26.046769 sshd[5210]: Connection closed by 10.0.0.1 port 57474 Nov 6 00:25:26.048608 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:26.056936 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:57474.service: Deactivated successfully. Nov 6 00:25:26.059027 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:25:26.060173 systemd-logind[1599]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:25:26.062972 systemd-logind[1599]: Removed session 20. Nov 6 00:25:26.065051 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:33422.service - OpenSSH per-connection server daemon (10.0.0.1:33422). 
Nov 6 00:25:26.132528 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 33422 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:26.134255 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:26.139404 systemd-logind[1599]: New session 21 of user core. Nov 6 00:25:26.144017 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:25:26.236500 containerd[1612]: time="2025-11-06T00:25:26.236425022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:26.317317 containerd[1612]: time="2025-11-06T00:25:26.317231778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:25:26.317514 containerd[1612]: time="2025-11-06T00:25:26.317381791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:26.317726 kubelet[2840]: E1106 00:25:26.317665 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:26.318259 kubelet[2840]: E1106 00:25:26.317738 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:26.318671 kubelet[2840]: E1106 00:25:26.318593 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6cf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nvmth_calico-system(a4ce142d-4dd8-4bd0-9300-ec9175d131da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:26.319832 kubelet[2840]: E1106 00:25:26.319777 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:25:26.855223 containerd[1612]: 
time="2025-11-06T00:25:26.855146663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:25:27.279118 containerd[1612]: time="2025-11-06T00:25:27.278886400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:27.358840 containerd[1612]: time="2025-11-06T00:25:27.358742101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:25:27.359014 containerd[1612]: time="2025-11-06T00:25:27.358917211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:25:27.359206 kubelet[2840]: E1106 00:25:27.359124 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:27.359206 kubelet[2840]: E1106 00:25:27.359203 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:27.359698 kubelet[2840]: E1106 00:25:27.359401 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:27.362278 containerd[1612]: time="2025-11-06T00:25:27.361989075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:25:27.579294 sshd[5227]: Connection closed by 10.0.0.1 port 33422 Nov 6 00:25:27.579668 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:27.591986 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:33422.service: Deactivated successfully. Nov 6 00:25:27.594086 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:25:27.594989 systemd-logind[1599]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:25:27.598771 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:33428.service - OpenSSH per-connection server daemon (10.0.0.1:33428). Nov 6 00:25:27.599681 systemd-logind[1599]: Removed session 21. Nov 6 00:25:27.663401 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 33428 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:27.665253 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:27.670375 systemd-logind[1599]: New session 22 of user core. Nov 6 00:25:27.680989 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 00:25:27.759369 containerd[1612]: time="2025-11-06T00:25:27.759161750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:28.009517 containerd[1612]: time="2025-11-06T00:25:28.009259543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:25:28.009517 containerd[1612]: time="2025-11-06T00:25:28.009390310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:25:28.010183 kubelet[2840]: E1106 00:25:28.010075 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:28.010183 kubelet[2840]: E1106 00:25:28.010164 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:28.011450 kubelet[2840]: E1106 00:25:28.011388 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4vx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2gxx8_calico-system(02736082-3e52-4d26-97e7-7ca149273f4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:28.012616 kubelet[2840]: E1106 00:25:28.012571 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:25:28.619474 sshd[5244]: Connection closed by 10.0.0.1 port 33428 Nov 6 00:25:28.619858 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:28.632014 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:33428.service: Deactivated successfully. Nov 6 00:25:28.635163 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 00:25:28.636475 systemd-logind[1599]: Session 22 logged out. Waiting for processes to exit. 
Nov 6 00:25:28.641614 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:33438.service - OpenSSH per-connection server daemon (10.0.0.1:33438). Nov 6 00:25:28.642353 systemd-logind[1599]: Removed session 22. Nov 6 00:25:28.712105 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 33438 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:28.713588 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:28.718116 systemd-logind[1599]: New session 23 of user core. Nov 6 00:25:28.727961 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 00:25:28.855472 containerd[1612]: time="2025-11-06T00:25:28.855304978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:25:29.005624 sshd[5285]: Connection closed by 10.0.0.1 port 33438 Nov 6 00:25:29.007137 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:29.017659 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:33438.service: Deactivated successfully. Nov 6 00:25:29.020803 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 00:25:29.022749 systemd-logind[1599]: Session 23 logged out. Waiting for processes to exit. Nov 6 00:25:29.026249 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:33454.service - OpenSSH per-connection server daemon (10.0.0.1:33454). Nov 6 00:25:29.027331 systemd-logind[1599]: Removed session 23. Nov 6 00:25:29.096029 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 33454 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:29.097909 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:29.103253 systemd-logind[1599]: New session 24 of user core. Nov 6 00:25:29.110018 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 00:25:29.247441 containerd[1612]: time="2025-11-06T00:25:29.247209233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:29.257266 sshd[5299]: Connection closed by 10.0.0.1 port 33454 Nov 6 00:25:29.258062 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:29.263254 systemd-logind[1599]: Session 24 logged out. Waiting for processes to exit. Nov 6 00:25:29.263630 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:33454.service: Deactivated successfully. Nov 6 00:25:29.266364 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 00:25:29.268772 systemd-logind[1599]: Removed session 24. 
Nov 6 00:25:29.272265 containerd[1612]: time="2025-11-06T00:25:29.272116292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:25:29.272358 containerd[1612]: time="2025-11-06T00:25:29.272177518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:25:29.272557 kubelet[2840]: E1106 00:25:29.272512 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:29.272917 kubelet[2840]: E1106 00:25:29.272572 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:29.272917 kubelet[2840]: E1106 00:25:29.272717 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0e734bd69df1469bb7194c239cb44140,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d79bfcd45-zqqkv_calico-system(d1c893a4-aa9b-4a6a-9aff-057008af6a5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:29.275292 containerd[1612]: time="2025-11-06T00:25:29.275202020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:25:29.613593 containerd[1612]: 
time="2025-11-06T00:25:29.613536149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:29.781829 containerd[1612]: time="2025-11-06T00:25:29.781746618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:25:29.782025 containerd[1612]: time="2025-11-06T00:25:29.781833081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:29.782194 kubelet[2840]: E1106 00:25:29.782074 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:29.782194 kubelet[2840]: E1106 00:25:29.782155 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:29.782410 kubelet[2840]: E1106 00:25:29.782328 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v55h9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6d79bfcd45-zqqkv_calico-system(d1c893a4-aa9b-4a6a-9aff-057008af6a5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:29.783604 kubelet[2840]: E1106 00:25:29.783531 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d79bfcd45-zqqkv" podUID="d1c893a4-aa9b-4a6a-9aff-057008af6a5e" Nov 6 00:25:30.857295 containerd[1612]: time="2025-11-06T00:25:30.857235582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:31.317562 containerd[1612]: time="2025-11-06T00:25:31.317505938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:31.483675 containerd[1612]: time="2025-11-06T00:25:31.483611680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:31.483675 containerd[1612]: time="2025-11-06T00:25:31.483652777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:31.483972 kubelet[2840]: E1106 00:25:31.483924 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:31.484315 kubelet[2840]: E1106 00:25:31.483977 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:31.484315 kubelet[2840]: E1106 00:25:31.484233 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvdc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79747456c8-9kz7k_calico-apiserver(04a82cf5-90fc-40d4-9038-65add7f7f20f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:31.484426 containerd[1612]: time="2025-11-06T00:25:31.484248873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:31.485454 kubelet[2840]: E1106 00:25:31.485426 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:25:31.872655 containerd[1612]: time="2025-11-06T00:25:31.872572944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:31.953826 containerd[1612]: time="2025-11-06T00:25:31.953699893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:31.953826 containerd[1612]: time="2025-11-06T00:25:31.953756740Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:31.954073 kubelet[2840]: E1106 00:25:31.953995 2840 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:31.954073 kubelet[2840]: E1106 00:25:31.954051 2840 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:31.954241 kubelet[2840]: E1106 00:25:31.954191 2840 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdg5t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-79747456c8-w2vxq_calico-apiserver(00228e39-7e54-4a3f-b428-59bfdf4f00aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:31.955470 kubelet[2840]: E1106 00:25:31.955415 2840 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:25:34.283282 systemd[1]: Started sshd@24-10.0.0.58:22-10.0.0.1:33470.service - OpenSSH per-connection server daemon (10.0.0.1:33470). Nov 6 00:25:34.360832 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 33470 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:34.359772 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:34.375747 systemd-logind[1599]: New session 25 of user core. Nov 6 00:25:34.379998 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 6 00:25:34.516519 sshd[5317]: Connection closed by 10.0.0.1 port 33470 Nov 6 00:25:34.518037 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:34.523238 systemd[1]: sshd@24-10.0.0.58:22-10.0.0.1:33470.service: Deactivated successfully. Nov 6 00:25:34.526455 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 00:25:34.527899 systemd-logind[1599]: Session 25 logged out. Waiting for processes to exit. Nov 6 00:25:34.530508 systemd-logind[1599]: Removed session 25. Nov 6 00:25:35.854684 kubelet[2840]: E1106 00:25:35.854553 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:25:39.534770 systemd[1]: Started sshd@25-10.0.0.58:22-10.0.0.1:36080.service - OpenSSH per-connection server daemon (10.0.0.1:36080). Nov 6 00:25:39.592204 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:39.593864 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:39.598595 systemd-logind[1599]: New session 26 of user core. Nov 6 00:25:39.612973 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 00:25:39.740124 sshd[5334]: Connection closed by 10.0.0.1 port 36080 Nov 6 00:25:39.740662 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:39.745925 systemd[1]: sshd@25-10.0.0.58:22-10.0.0.1:36080.service: Deactivated successfully. Nov 6 00:25:39.748579 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 00:25:39.751199 systemd-logind[1599]: Session 26 logged out. Waiting for processes to exit. Nov 6 00:25:39.752793 systemd-logind[1599]: Removed session 26. 
Nov 6 00:25:40.855444 kubelet[2840]: E1106 00:25:40.855378 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d79bfcd45-zqqkv" podUID="d1c893a4-aa9b-4a6a-9aff-057008af6a5e" Nov 6 00:25:41.855894 kubelet[2840]: E1106 00:25:41.855380 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nvmth" podUID="a4ce142d-4dd8-4bd0-9300-ec9175d131da" Nov 6 00:25:42.855456 kubelet[2840]: E1106 00:25:42.855361 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-9kz7k" podUID="04a82cf5-90fc-40d4-9038-65add7f7f20f" Nov 6 00:25:42.855848 kubelet[2840]: E1106 00:25:42.855791 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2gxx8" podUID="02736082-3e52-4d26-97e7-7ca149273f4e" Nov 6 00:25:44.757140 systemd[1]: Started sshd@26-10.0.0.58:22-10.0.0.1:36096.service - OpenSSH per-connection server daemon (10.0.0.1:36096). 
Nov 6 00:25:44.826729 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 36096 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:44.828400 sshd-session[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:44.832677 systemd-logind[1599]: New session 27 of user core. Nov 6 00:25:44.849039 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 6 00:25:45.018324 sshd[5353]: Connection closed by 10.0.0.1 port 36096 Nov 6 00:25:45.020147 sshd-session[5350]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:45.026636 systemd[1]: sshd@26-10.0.0.58:22-10.0.0.1:36096.service: Deactivated successfully. Nov 6 00:25:45.029233 systemd[1]: session-27.scope: Deactivated successfully. Nov 6 00:25:45.030228 systemd-logind[1599]: Session 27 logged out. Waiting for processes to exit. Nov 6 00:25:45.032761 systemd-logind[1599]: Removed session 27. Nov 6 00:25:46.855838 kubelet[2840]: E1106 00:25:46.855331 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79747456c8-w2vxq" podUID="00228e39-7e54-4a3f-b428-59bfdf4f00aa" Nov 6 00:25:46.855838 kubelet[2840]: E1106 00:25:46.855396 2840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-864c69c456-2zzkg" podUID="e5128615-7aa8-48b2-97ed-a5b035282b5e" Nov 6 00:25:48.312498 containerd[1612]: time="2025-11-06T00:25:48.312292027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae64c1d8601a83c5414344192f698b931f94d3b78c3e14d03c9f66ad2fec9f10\" id:\"182240e3d6e3cba2cd4eff34697cfe9a59fabd53ba3642812ed6b063d047ba67\" pid:5380 exited_at:{seconds:1762388748 nanos:311833683}" Nov 6 00:25:50.045022 systemd[1]: Started sshd@27-10.0.0.58:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734). Nov 6 00:25:50.112119 sshd[5395]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:L/X2hZ0RfhoRT0scDEdxSx2ppA9YR+iZOac1cX0yhcQ Nov 6 00:25:50.113905 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:50.119746 systemd-logind[1599]: New session 28 of user core. Nov 6 00:25:50.127968 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 6 00:25:50.270797 sshd[5400]: Connection closed by 10.0.0.1 port 47734 Nov 6 00:25:50.271168 sshd-session[5395]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:50.276560 systemd[1]: sshd@27-10.0.0.58:22-10.0.0.1:47734.service: Deactivated successfully. Nov 6 00:25:50.279502 systemd[1]: session-28.scope: Deactivated successfully. 
Nov 6 00:25:50.280546 systemd-logind[1599]: Session 28 logged out. Waiting for processes to exit. Nov 6 00:25:50.282582 systemd-logind[1599]: Removed session 28.