Dec 12 18:34:39.982714 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025
Dec 12 18:34:39.982740 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:34:39.982749 kernel: BIOS-provided physical RAM map:
Dec 12 18:34:39.982758 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:34:39.982765 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 12 18:34:39.982771 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 12 18:34:39.982779 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 12 18:34:39.982803 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 12 18:34:39.982812 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 12 18:34:39.982819 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 12 18:34:39.982826 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 12 18:34:39.982832 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 12 18:34:39.982841 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 12 18:34:39.982848 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 12 18:34:39.982856 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 12 18:34:39.982864 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 12 18:34:39.982873 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 12 18:34:39.982882 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 12 18:34:39.982890 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 12 18:34:39.982897 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 12 18:34:39.982904 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 12 18:34:39.982911 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 12 18:34:39.982918 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 12 18:34:39.982925 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:34:39.982932 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 12 18:34:39.982940 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:34:39.982947 kernel: NX (Execute Disable) protection: active
Dec 12 18:34:39.982954 kernel: APIC: Static calls initialized
Dec 12 18:34:39.982963 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Dec 12 18:34:39.982971 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Dec 12 18:34:39.982978 kernel: extended physical RAM map:
Dec 12 18:34:39.982985 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 12 18:34:39.982992 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 12 18:34:39.982999 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 12 18:34:39.983007 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 12 18:34:39.983014 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 12 18:34:39.983021 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 12 18:34:39.983028 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 12 18:34:39.983035 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Dec 12 18:34:39.983048 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Dec 12 18:34:39.983071 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Dec 12 18:34:39.983079 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Dec 12 18:34:39.983088 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Dec 12 18:34:39.983098 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 12 18:34:39.983110 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 12 18:34:39.983118 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 12 18:34:39.983125 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 12 18:34:39.983132 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 12 18:34:39.983140 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Dec 12 18:34:39.983147 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Dec 12 18:34:39.983155 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Dec 12 18:34:39.983162 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Dec 12 18:34:39.983169 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 12 18:34:39.983177 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 12 18:34:39.983184 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 12 18:34:39.983194 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 12 18:34:39.983201 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 12 18:34:39.983209 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 12 18:34:39.983219 kernel: efi: EFI v2.7 by EDK II
Dec 12 18:34:39.983227 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Dec 12 18:34:39.983234 kernel: random: crng init done
Dec 12 18:34:39.983244 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 12 18:34:39.983252 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 12 18:34:39.983261 kernel: secureboot: Secure boot disabled
Dec 12 18:34:39.983269 kernel: SMBIOS 2.8 present.
Dec 12 18:34:39.983276 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 12 18:34:39.983286 kernel: DMI: Memory slots populated: 1/1
Dec 12 18:34:39.983302 kernel: Hypervisor detected: KVM
Dec 12 18:34:39.983310 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 12 18:34:39.983317 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 12 18:34:39.983325 kernel: kvm-clock: using sched offset of 6497869061 cycles
Dec 12 18:34:39.983334 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 18:34:39.983341 kernel: tsc: Detected 2794.750 MHz processor
Dec 12 18:34:39.983349 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 12 18:34:39.983357 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 12 18:34:39.983364 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 12 18:34:39.983372 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 12 18:34:39.983382 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 12 18:34:39.983389 kernel: Using GB pages for direct mapping
Dec 12 18:34:39.983397 kernel: ACPI: Early table checksum verification disabled
Dec 12 18:34:39.983405 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 12 18:34:39.983412 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 18:34:39.983420 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:34:39.983428 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:34:39.983435 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 12 18:34:39.983443 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:34:39.983453 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:34:39.983460 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:34:39.983468 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 18:34:39.983475 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 12 18:34:39.983483 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 12 18:34:39.983491 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Dec 12 18:34:39.983498 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 12 18:34:39.983506 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 12 18:34:39.983516 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 12 18:34:39.983523 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 12 18:34:39.983531 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 12 18:34:39.983538 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 12 18:34:39.983546 kernel: No NUMA configuration found
Dec 12 18:34:39.983553 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 12 18:34:39.983561 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Dec 12 18:34:39.983568 kernel: Zone ranges:
Dec 12 18:34:39.983576 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 12 18:34:39.983583 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 12 18:34:39.983593 kernel: Normal empty
Dec 12 18:34:39.983600 kernel: Device empty
Dec 12 18:34:39.983608 kernel: Movable zone start for each node
Dec 12 18:34:39.983615 kernel: Early memory node ranges
Dec 12 18:34:39.983623 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 12 18:34:39.983633 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 12 18:34:39.983640 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 12 18:34:39.983648 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 12 18:34:39.983655 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 12 18:34:39.983665 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 12 18:34:39.983672 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Dec 12 18:34:39.983680 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Dec 12 18:34:39.983687 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 12 18:34:39.983697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:34:39.983712 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 12 18:34:39.983722 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 12 18:34:39.983730 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 12 18:34:39.983737 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 12 18:34:39.983745 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 12 18:34:39.983753 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 12 18:34:39.983761 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 12 18:34:39.983771 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 12 18:34:39.983779 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 12 18:34:39.983799 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 12 18:34:39.983807 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 12 18:34:39.983815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 12 18:34:39.983825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 12 18:34:39.983833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 12 18:34:39.983841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 12 18:34:39.983849 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 12 18:34:39.983857 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 12 18:34:39.983865 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 12 18:34:39.983873 kernel: TSC deadline timer available
Dec 12 18:34:39.983880 kernel: CPU topo: Max. logical packages: 1
Dec 12 18:34:39.983888 kernel: CPU topo: Max. logical dies: 1
Dec 12 18:34:39.983898 kernel: CPU topo: Max. dies per package: 1
Dec 12 18:34:39.983906 kernel: CPU topo: Max. threads per core: 1
Dec 12 18:34:39.983914 kernel: CPU topo: Num. cores per package: 4
Dec 12 18:34:39.983921 kernel: CPU topo: Num. threads per package: 4
Dec 12 18:34:39.983929 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 12 18:34:39.983937 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 12 18:34:39.983945 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 12 18:34:39.983953 kernel: kvm-guest: setup PV sched yield
Dec 12 18:34:39.983961 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 12 18:34:39.983971 kernel: Booting paravirtualized kernel on KVM
Dec 12 18:34:39.983979 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 12 18:34:39.983987 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 12 18:34:39.983995 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 12 18:34:39.984002 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 12 18:34:39.984010 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 12 18:34:39.984018 kernel: kvm-guest: PV spinlocks enabled
Dec 12 18:34:39.984026 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 12 18:34:39.984037 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:34:39.984048 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 18:34:39.984056 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 18:34:39.984064 kernel: Fallback order for Node 0: 0
Dec 12 18:34:39.984071 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Dec 12 18:34:39.984079 kernel: Policy zone: DMA32
Dec 12 18:34:39.984087 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 18:34:39.984095 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 18:34:39.984103 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 12 18:34:39.984113 kernel: ftrace: allocated 157 pages with 5 groups
Dec 12 18:34:39.984121 kernel: Dynamic Preempt: voluntary
Dec 12 18:34:39.984128 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 18:34:39.984137 kernel: rcu: RCU event tracing is enabled.
Dec 12 18:34:39.984145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 18:34:39.984153 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 18:34:39.984161 kernel: Rude variant of Tasks RCU enabled.
Dec 12 18:34:39.984169 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 18:34:39.984177 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 18:34:39.984187 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 18:34:39.984197 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:34:39.984206 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:34:39.984214 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 18:34:39.984222 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 12 18:34:39.984229 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 18:34:39.984237 kernel: Console: colour dummy device 80x25
Dec 12 18:34:39.984245 kernel: printk: legacy console [ttyS0] enabled
Dec 12 18:34:39.984253 kernel: ACPI: Core revision 20240827
Dec 12 18:34:39.984263 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 12 18:34:39.984271 kernel: APIC: Switch to symmetric I/O mode setup
Dec 12 18:34:39.984279 kernel: x2apic enabled
Dec 12 18:34:39.984286 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 12 18:34:39.984302 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 12 18:34:39.984310 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 12 18:34:39.984317 kernel: kvm-guest: setup PV IPIs
Dec 12 18:34:39.984326 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 12 18:34:39.984334 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 12 18:34:39.984344 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Dec 12 18:34:39.984352 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 12 18:34:39.984360 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 12 18:34:39.984368 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 12 18:34:39.984376 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 12 18:34:39.984384 kernel: Spectre V2 : Mitigation: Retpolines
Dec 12 18:34:39.984392 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 12 18:34:39.984400 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 12 18:34:39.984408 kernel: active return thunk: retbleed_return_thunk
Dec 12 18:34:39.984418 kernel: RETBleed: Mitigation: untrained return thunk
Dec 12 18:34:39.984428 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 12 18:34:39.984436 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 12 18:34:39.984444 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 12 18:34:39.984454 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 12 18:34:39.984468 kernel: active return thunk: srso_return_thunk
Dec 12 18:34:39.984480 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 12 18:34:39.984490 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 12 18:34:39.984503 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 12 18:34:39.984513 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 12 18:34:39.984523 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 12 18:34:39.984533 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 12 18:34:39.984542 kernel: Freeing SMP alternatives memory: 32K
Dec 12 18:34:39.984552 kernel: pid_max: default: 32768 minimum: 301
Dec 12 18:34:39.984561 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 18:34:39.984571 kernel: landlock: Up and running.
Dec 12 18:34:39.984581 kernel: SELinux: Initializing.
Dec 12 18:34:39.984594 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:34:39.984604 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 18:34:39.984613 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 12 18:34:39.984621 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 12 18:34:39.984629 kernel: ... version: 0
Dec 12 18:34:39.984636 kernel: ... bit width: 48
Dec 12 18:34:39.984644 kernel: ... generic registers: 6
Dec 12 18:34:39.984652 kernel: ... value mask: 0000ffffffffffff
Dec 12 18:34:39.984660 kernel: ... max period: 00007fffffffffff
Dec 12 18:34:39.984670 kernel: ... fixed-purpose events: 0
Dec 12 18:34:39.984678 kernel: ... event mask: 000000000000003f
Dec 12 18:34:39.984686 kernel: signal: max sigframe size: 1776
Dec 12 18:34:39.984693 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 18:34:39.984701 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 18:34:39.984714 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 18:34:39.984721 kernel: smp: Bringing up secondary CPUs ...
Dec 12 18:34:39.984729 kernel: smpboot: x86: Booting SMP configuration:
Dec 12 18:34:39.984737 kernel: .... node #0, CPUs: #1 #2 #3
Dec 12 18:34:39.984744 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 18:34:39.984755 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Dec 12 18:34:39.984763 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 145388K reserved, 0K cma-reserved)
Dec 12 18:34:39.984771 kernel: devtmpfs: initialized
Dec 12 18:34:39.984779 kernel: x86/mm: Memory block size: 128MB
Dec 12 18:34:39.984842 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 12 18:34:39.984850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 12 18:34:39.984858 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 12 18:34:39.984866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 12 18:34:39.984877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Dec 12 18:34:39.984885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 12 18:34:39.984893 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 18:34:39.984901 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 18:34:39.984910 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 18:34:39.984918 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 18:34:39.984926 kernel: audit: initializing netlink subsys (disabled)
Dec 12 18:34:39.984935 kernel: audit: type=2000 audit(1765564475.389:1): state=initialized audit_enabled=0 res=1
Dec 12 18:34:39.984944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 18:34:39.984956 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 12 18:34:39.984964 kernel: cpuidle: using governor menu
Dec 12 18:34:39.984972 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 18:34:39.984980 kernel: dca service started, version 1.12.1
Dec 12 18:34:39.984988 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 12 18:34:39.984996 kernel: PCI: Using configuration type 1 for base access
Dec 12 18:34:39.985004 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 12 18:34:39.985012 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 18:34:39.985020 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 18:34:39.985030 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 18:34:39.985037 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 18:34:39.985045 kernel: ACPI: Added _OSI(Module Device)
Dec 12 18:34:39.985053 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 18:34:39.985061 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 18:34:39.985069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 18:34:39.985076 kernel: ACPI: Interpreter enabled
Dec 12 18:34:39.985084 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 12 18:34:39.985092 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 12 18:34:39.985102 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 12 18:34:39.985110 kernel: PCI: Using E820 reservations for host bridge windows
Dec 12 18:34:39.985118 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 12 18:34:39.985126 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 18:34:39.985357 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 18:34:39.985489 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 12 18:34:39.985651 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 12 18:34:39.985667 kernel: PCI host bridge to bus 0000:00
Dec 12 18:34:39.985868 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 12 18:34:39.986009 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 12 18:34:39.986138 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 12 18:34:39.986265 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 12 18:34:39.986398 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 12 18:34:39.986516 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 12 18:34:39.986657 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 18:34:39.986837 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 12 18:34:39.987034 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 12 18:34:39.987252 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 12 18:34:39.987461 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 12 18:34:39.987589 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 12 18:34:39.987712 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 12 18:34:39.987883 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 18:34:39.988010 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 12 18:34:39.988133 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 12 18:34:39.988255 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 12 18:34:39.988408 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 12 18:34:39.988533 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 12 18:34:39.988662 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 12 18:34:39.988799 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 12 18:34:39.988936 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 12 18:34:39.989084 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 12 18:34:39.989232 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 12 18:34:39.989615 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 12 18:34:39.989828 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 12 18:34:39.990046 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 12 18:34:39.990204 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 12 18:34:39.990356 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 12 18:34:39.990568 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 12 18:34:39.990828 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 12 18:34:39.991015 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 12 18:34:39.991804 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 12 18:34:39.991823 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 12 18:34:39.991831 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 12 18:34:39.991840 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 12 18:34:39.991848 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 12 18:34:39.991856 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 12 18:34:39.991864 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 12 18:34:39.991872 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 12 18:34:39.991880 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 12 18:34:39.991890 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 12 18:34:39.991898 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 12 18:34:39.991906 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 12 18:34:39.991914 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 12 18:34:39.991922 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 12 18:34:39.991930 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 12 18:34:39.991938 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 12 18:34:39.991946 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 12 18:34:39.991954 kernel: iommu: Default domain type: Translated
Dec 12 18:34:39.991964 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 12 18:34:39.991971 kernel: efivars: Registered efivars operations
Dec 12 18:34:39.991979 kernel: PCI: Using ACPI for IRQ routing
Dec 12 18:34:39.991987 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 12 18:34:39.992004 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 12 18:34:39.992012 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 12 18:34:39.992024 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Dec 12 18:34:39.992039 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Dec 12 18:34:39.992055 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 12 18:34:39.992075 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 12 18:34:39.992091 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Dec 12 18:34:39.992099 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 12 18:34:39.992258 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 12 18:34:39.992511 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 12 18:34:39.992686 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 12 18:34:39.992702 kernel: vgaarb: loaded
Dec 12 18:34:39.992712 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 12 18:34:39.992740 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 12 18:34:39.992757 kernel: clocksource: Switched to clocksource kvm-clock
Dec 12 18:34:39.992779 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 18:34:39.992809 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 18:34:39.992817 kernel: pnp: PnP ACPI init
Dec 12 18:34:39.993110 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 12 18:34:39.993135 kernel: pnp: PnP ACPI: found 6 devices
Dec 12 18:34:39.993145 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 12 18:34:39.993154 kernel: NET: Registered PF_INET protocol family
Dec 12 18:34:39.993168 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 18:34:39.993177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 18:34:39.993185 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 18:34:39.993193 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 18:34:39.993202 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 18:34:39.993210 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 18:34:39.993218 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:34:39.993226 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 18:34:39.993239 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 18:34:39.993258 kernel: NET: Registered PF_XDP protocol family
Dec 12 18:34:39.993953 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 12 18:34:39.994083 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 12 18:34:39.994204 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 12 18:34:39.994348 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 12 18:34:39.994488 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 12 18:34:39.994602 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 12 18:34:39.994725 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 12 18:34:39.994863 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 12 18:34:39.994876 kernel: PCI: CLS 0 bytes, default 64
Dec 12 18:34:39.994885 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Dec 12 18:34:39.994901 kernel: Initialise system trusted keyrings
Dec 12 18:34:39.994914 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 18:34:39.994922 kernel: Key type asymmetric registered
Dec 12 18:34:39.994930 kernel: Asymmetric key parser 'x509' registered
Dec 12 18:34:39.994938 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 12 18:34:39.994946 kernel: io scheduler mq-deadline registered
Dec 12 18:34:39.994954 kernel: io scheduler kyber registered
Dec 12 18:34:39.994963 kernel: io scheduler bfq registered
Dec 12 18:34:39.994971 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 12 18:34:39.994984 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 12 18:34:39.994992 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 12 18:34:39.995005 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 12 18:34:39.995013 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 18:34:39.995022 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 12 18:34:39.995030 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 12 18:34:39.995038 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 12 18:34:39.995047 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 12 18:34:39.995190 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 12 18:34:39.995203 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 12 18:34:39.995337 kernel: rtc_cmos 00:04: registered as rtc0
Dec 12 18:34:39.995455 kernel: rtc_cmos 00:04: setting system clock to 2025-12-12T18:34:39 UTC (1765564479)
Dec 12 18:34:39.995570 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 12 18:34:39.995581 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 12 18:34:39.995589 kernel: efifb: probing for efifb
Dec 12 18:34:39.995597 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 12 18:34:39.995606 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 12 18:34:39.995614 kernel: efifb: scrolling: redraw
Dec 12 18:34:39.995630 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 12 18:34:39.995638 kernel: Console: switching to colour frame buffer device 160x50
Dec 12 18:34:39.995647 kernel: fb0: EFI VGA frame buffer device
Dec 12 18:34:39.995655 kernel: pstore: Using crash dump compression: deflate
Dec 12 18:34:39.995663 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 12 18:34:39.995671 kernel: NET: Registered PF_INET6 protocol family
Dec 12 18:34:39.995679 kernel: Segment Routing with IPv6
Dec 12 18:34:39.995688 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 18:34:39.995696 kernel: NET: Registered PF_PACKET protocol family
Dec 12 18:34:39.995704 kernel: Key type dns_resolver registered
Dec 12 18:34:39.995717 kernel: IPI shorthand broadcast: enabled
Dec 12 18:34:39.995725 kernel: sched_clock: Marking stable (4173006646, 338801393)->(4682167283, -170359244)
Dec 12 18:34:39.995733 kernel: registered taskstats version 1
Dec 12 18:34:39.995741 kernel: Loading compiled-in X.509 certificates
Dec 12 18:34:39.995750 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d'
Dec 12 18:34:39.995758 kernel: Demotion targets for Node 0: null
Dec 12 18:34:39.995766 kernel: Key type .fscrypt registered
Dec 12 18:34:39.995774 kernel: Key type fscrypt-provisioning registered
Dec 12 18:34:39.995800 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 18:34:39.995813 kernel: ima: Allocated hash algorithm: sha1
Dec 12 18:34:39.995821 kernel: ima: No architecture policies found
Dec 12 18:34:39.995829 kernel: clk: Disabling unused clocks
Dec 12 18:34:39.995837 kernel: Warning: unable to open an initial console.
Dec 12 18:34:39.995846 kernel: Freeing unused kernel image (initmem) memory: 46188K
Dec 12 18:34:39.995854 kernel: Write protecting the kernel read-only data: 40960k
Dec 12 18:34:39.995863 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Dec 12 18:34:39.995871 kernel: Run /init as init process
Dec 12 18:34:39.995879 kernel: with arguments:
Dec 12 18:34:39.995911 kernel: /init
Dec 12 18:34:39.995919 kernel: with environment:
Dec 12 18:34:39.995928 kernel: HOME=/
Dec 12 18:34:39.995936 kernel: TERM=linux
Dec 12 18:34:39.995948 systemd[1]: Successfully made /usr/ read-only.
Dec 12 18:34:39.995960 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:34:39.995970 systemd[1]: Detected virtualization kvm.
Dec 12 18:34:39.995983 systemd[1]: Detected architecture x86-64.
Dec 12 18:34:39.995996 systemd[1]: Running in initrd.
Dec 12 18:34:39.996016 systemd[1]: No hostname configured, using default hostname.
Dec 12 18:34:39.996026 systemd[1]: Hostname set to <localhost>.
Dec 12 18:34:39.996035 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:34:39.996055 systemd[1]: Queued start job for default target initrd.target.
Dec 12 18:34:39.996095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:34:39.996117 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:34:39.996192 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 18:34:39.996207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:34:39.996216 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 18:34:39.996225 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 18:34:39.996236 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 18:34:39.996244 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 18:34:39.996253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:34:39.996266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:34:39.996275 systemd[1]: Reached target paths.target - Path Units.
Dec 12 18:34:39.996284 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:34:39.996299 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:34:39.996308 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 18:34:39.996317 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:34:39.996332 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:34:39.996341 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 18:34:39.996353 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 18:34:39.996367 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:34:39.996376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:34:39.996385 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:34:39.996399 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 18:34:39.996411 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 18:34:39.996419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:34:39.996428 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 18:34:39.996437 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 18:34:39.996452 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 18:34:39.996461 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:34:39.996469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:34:39.996478 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:34:39.996487 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 18:34:39.996497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:34:39.996510 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 18:34:39.996519 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:34:39.996560 systemd-journald[201]: Collecting audit messages is disabled.
Dec 12 18:34:39.996586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:34:39.996596 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 18:34:39.996605 systemd-journald[201]: Journal started
Dec 12 18:34:39.996631 systemd-journald[201]: Runtime Journal (/run/log/journal/156963ddc75c445a837e1e18099b3338) is 6M, max 48.1M, 42.1M free.
Dec 12 18:34:39.969592 systemd-modules-load[204]: Inserted module 'overlay'
Dec 12 18:34:40.002043 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:34:40.006324 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:34:40.012835 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 18:34:40.016281 systemd-modules-load[204]: Inserted module 'br_netfilter'
Dec 12 18:34:40.018150 kernel: Bridge firewalling registered
Dec 12 18:34:40.019631 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:34:40.024736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:34:40.025933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:34:40.028967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:34:40.047613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:34:40.063859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:34:40.074901 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:34:40.078739 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 18:34:40.114368 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 18:34:40.132042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:34:40.136986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:34:40.151572 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022
Dec 12 18:34:40.181035 systemd-resolved[248]: Positive Trust Anchors:
Dec 12 18:34:40.181064 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:34:40.181098 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:34:40.184476 systemd-resolved[248]: Defaulting to hostname 'linux'.
Dec 12 18:34:40.185987 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:34:40.202272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:34:40.306843 kernel: SCSI subsystem initialized
Dec 12 18:34:40.316821 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 18:34:40.330836 kernel: iscsi: registered transport (tcp)
Dec 12 18:34:40.359420 kernel: iscsi: registered transport (qla4xxx)
Dec 12 18:34:40.359542 kernel: QLogic iSCSI HBA Driver
Dec 12 18:34:40.396855 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:34:40.423288 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:34:40.448155 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:34:40.517088 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:34:40.520525 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 18:34:40.592838 kernel: raid6: avx2x4 gen() 26633 MB/s
Dec 12 18:34:40.609819 kernel: raid6: avx2x2 gen() 27785 MB/s
Dec 12 18:34:40.627707 kernel: raid6: avx2x1 gen() 20434 MB/s
Dec 12 18:34:40.627739 kernel: raid6: using algorithm avx2x2 gen() 27785 MB/s
Dec 12 18:34:40.645646 kernel: raid6: .... xor() 17530 MB/s, rmw enabled
Dec 12 18:34:40.645676 kernel: raid6: using avx2x2 recovery algorithm
Dec 12 18:34:40.670833 kernel: xor: automatically using best checksumming function avx
Dec 12 18:34:40.848843 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 18:34:40.858620 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:34:40.860561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:34:40.904993 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Dec 12 18:34:40.912527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:34:40.918158 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 18:34:40.953363 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Dec 12 18:34:40.987286 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:34:40.989283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:34:41.107161 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:34:41.111185 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 18:34:41.165818 kernel: cryptd: max_cpu_qlen set to 1000
Dec 12 18:34:41.175807 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Dec 12 18:34:41.218851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:34:41.221740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:34:41.227674 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:34:41.234238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:34:41.238835 kernel: libata version 3.00 loaded.
Dec 12 18:34:41.239300 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:34:41.243895 kernel: AES CTR mode by8 optimization enabled
Dec 12 18:34:41.243921 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 12 18:34:41.251598 kernel: ahci 0000:00:1f.2: version 3.0
Dec 12 18:34:41.251854 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 12 18:34:41.251877 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 18:34:41.255366 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 12 18:34:41.256881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:34:41.278221 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 12 18:34:41.278437 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 12 18:34:41.278594 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 18:34:41.278621 kernel: GPT:9289727 != 19775487
Dec 12 18:34:41.278632 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 18:34:41.278642 kernel: GPT:9289727 != 19775487
Dec 12 18:34:41.278652 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 18:34:41.278663 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:34:41.278673 kernel: scsi host0: ahci
Dec 12 18:34:41.278912 kernel: scsi host1: ahci
Dec 12 18:34:41.279076 kernel: scsi host2: ahci
Dec 12 18:34:41.279252 kernel: scsi host3: ahci
Dec 12 18:34:41.279417 kernel: scsi host4: ahci
Dec 12 18:34:41.279563 kernel: scsi host5: ahci
Dec 12 18:34:41.279740 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Dec 12 18:34:41.279752 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Dec 12 18:34:41.257023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:34:41.289085 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Dec 12 18:34:41.289113 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Dec 12 18:34:41.289129 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Dec 12 18:34:41.289144 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Dec 12 18:34:41.274603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:34:41.325372 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 18:34:41.337296 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 18:34:41.361619 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 18:34:41.361837 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 18:34:41.373387 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:34:41.378095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 18:34:41.386398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:34:41.594509 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 12 18:34:41.594593 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 12 18:34:41.594809 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 12 18:34:41.596862 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 12 18:34:41.597831 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 12 18:34:41.612851 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 12 18:34:41.613821 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:34:41.615447 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 12 18:34:41.615470 kernel: ata3.00: applying bridge limits
Dec 12 18:34:41.617387 kernel: ata3.00: LPM support broken, forcing max_power
Dec 12 18:34:41.617414 kernel: ata3.00: configured for UDMA/100
Dec 12 18:34:41.618814 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 12 18:34:41.677505 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 12 18:34:41.677816 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 12 18:34:41.687940 disk-uuid[612]: Primary Header is updated.
Dec 12 18:34:41.687940 disk-uuid[612]: Secondary Entries is updated.
Dec 12 18:34:41.687940 disk-uuid[612]: Secondary Header is updated.
Dec 12 18:34:41.694509 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:34:41.700830 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:34:41.700898 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 12 18:34:42.117451 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:34:42.121895 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:34:42.126300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:34:42.130821 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:34:42.135564 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 18:34:42.170166 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:34:42.705817 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 18:34:42.708497 disk-uuid[621]: The operation has completed successfully.
Dec 12 18:34:42.740266 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 18:34:42.740394 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 18:34:42.782563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 18:34:42.809176 sh[651]: Success
Dec 12 18:34:42.830843 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 18:34:42.830947 kernel: device-mapper: uevent: version 1.0.3
Dec 12 18:34:42.830967 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 18:34:42.844815 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Dec 12 18:34:42.880251 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 18:34:42.886385 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 18:34:42.908474 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 18:34:42.915080 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (663)
Dec 12 18:34:42.918740 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8
Dec 12 18:34:42.918765 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:34:42.924730 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 18:34:42.924752 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 18:34:42.926542 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 18:34:42.929578 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:34:42.929909 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 18:34:42.931130 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 18:34:42.935617 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 18:34:42.972825 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (696)
Dec 12 18:34:42.972881 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:34:42.975634 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:34:42.979661 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:34:42.979687 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:34:42.985811 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:34:42.986709 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 18:34:42.988247 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 18:34:43.186715 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:34:43.245280 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:34:43.248463 ignition[741]: Ignition 2.22.0
Dec 12 18:34:43.248474 ignition[741]: Stage: fetch-offline
Dec 12 18:34:43.248530 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:34:43.248541 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:34:43.248692 ignition[741]: parsed url from cmdline: ""
Dec 12 18:34:43.248697 ignition[741]: no config URL provided
Dec 12 18:34:43.248704 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 18:34:43.248717 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Dec 12 18:34:43.248746 ignition[741]: op(1): [started] loading QEMU firmware config module
Dec 12 18:34:43.248755 ignition[741]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 18:34:43.257116 ignition[741]: op(1): [finished] loading QEMU firmware config module
Dec 12 18:34:43.295907 systemd-networkd[838]: lo: Link UP
Dec 12 18:34:43.295919 systemd-networkd[838]: lo: Gained carrier
Dec 12 18:34:43.297771 systemd-networkd[838]: Enumeration completed
Dec 12 18:34:43.298259 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:34:43.298263 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:34:43.298703 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:34:43.299796 systemd-networkd[838]: eth0: Link UP
Dec 12 18:34:43.299977 systemd-networkd[838]: eth0: Gained carrier
Dec 12 18:34:43.299987 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:34:43.303102 systemd[1]: Reached target network.target - Network.
Dec 12 18:34:43.319845 systemd-networkd[838]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 18:34:43.364078 ignition[741]: parsing config with SHA512: 11f7379b1004c494a411d45ca6add0fd7666ceff03480bd2242841d9525ee8a9df4e5e01faac322238c21e8e720f710166b858f96b25ab60729a7105a7c7836e
Dec 12 18:34:43.371760 unknown[741]: fetched base config from "system"
Dec 12 18:34:43.371774 unknown[741]: fetched user config from "qemu"
Dec 12 18:34:43.372165 ignition[741]: fetch-offline: fetch-offline passed
Dec 12 18:34:43.375074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:34:43.372240 ignition[741]: Ignition finished successfully
Dec 12 18:34:43.377138 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 18:34:43.378721 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 18:34:43.457013 ignition[846]: Ignition 2.22.0
Dec 12 18:34:43.457034 ignition[846]: Stage: kargs
Dec 12 18:34:43.457278 ignition[846]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:34:43.457295 ignition[846]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:34:43.458617 ignition[846]: kargs: kargs passed
Dec 12 18:34:43.458696 ignition[846]: Ignition finished successfully
Dec 12 18:34:43.466987 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 18:34:43.468851 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 18:34:43.518098 ignition[854]: Ignition 2.22.0
Dec 12 18:34:43.518110 ignition[854]: Stage: disks
Dec 12 18:34:43.518251 ignition[854]: no configs at "/usr/lib/ignition/base.d"
Dec 12 18:34:43.518261 ignition[854]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:34:43.518976 ignition[854]: disks: disks passed
Dec 12 18:34:43.519039 ignition[854]: Ignition finished successfully
Dec 12 18:34:43.529120 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 18:34:43.532692 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 18:34:43.532817 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 18:34:43.540304 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:34:43.540399 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:34:43.545072 systemd[1]: Reached target basic.target - Basic System.
Dec 12 18:34:43.549764 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 18:34:43.586837 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 18:34:43.596072 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 18:34:43.601136 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 18:34:43.721832 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none.
Dec 12 18:34:43.723111 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 18:34:43.725428 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:34:43.728073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:34:43.733496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 18:34:43.733937 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 18:34:43.733989 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 18:34:43.734017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:34:43.751501 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 18:34:43.757819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (872)
Dec 12 18:34:43.758093 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 18:34:43.762510 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:34:43.762552 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:34:43.769040 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:34:43.769099 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:34:43.772281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:34:43.804870 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 18:34:43.811298 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory
Dec 12 18:34:43.816943 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 18:34:43.822169 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 18:34:43.916690 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 18:34:43.921835 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 18:34:43.925848 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 18:34:43.941407 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 18:34:43.943800 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:34:43.958917 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 18:34:43.995620 ignition[986]: INFO : Ignition 2.22.0
Dec 12 18:34:43.995620 ignition[986]: INFO : Stage: mount
Dec 12 18:34:43.998655 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:34:43.998655 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:34:43.998655 ignition[986]: INFO : mount: mount passed
Dec 12 18:34:43.998655 ignition[986]: INFO : Ignition finished successfully
Dec 12 18:34:44.000187 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 18:34:44.003918 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 18:34:44.033098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 18:34:44.063058 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998)
Dec 12 18:34:44.063105 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a
Dec 12 18:34:44.063117 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 12 18:34:44.069112 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 18:34:44.069142 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 18:34:44.071231 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 18:34:44.133151 ignition[1015]: INFO : Ignition 2.22.0
Dec 12 18:34:44.133151 ignition[1015]: INFO : Stage: files
Dec 12 18:34:44.135985 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:34:44.135985 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:34:44.135985 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 18:34:44.141539 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 18:34:44.141539 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 18:34:44.146058 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 18:34:44.146058 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 18:34:44.146058 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 18:34:44.143642 unknown[1015]: wrote ssh authorized keys file for user: core
Dec 12 18:34:44.154324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:34:44.154324 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 12 18:34:44.199342 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 18:34:44.376067 systemd-networkd[838]: eth0: Gained IPv6LL
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:34:44.377839 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 18:34:44.404938 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:34:44.407836 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 18:34:44.407836 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:34:44.415424 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:34:44.415424 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:34:44.415424 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 12 18:34:44.684806 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 18:34:45.256827 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 12 18:34:45.256827 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 18:34:45.262744 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:34:45.273311 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 18:34:45.273311 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 18:34:45.273311 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 18:34:45.281540 ignition[1015]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 18:34:45.281540 ignition[1015]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 18:34:45.281540 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 18:34:45.281540 ignition[1015]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 12 18:34:45.299386 ignition[1015]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 18:34:45.310293 ignition[1015]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 18:34:45.312961 ignition[1015]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 12 18:34:45.312961 ignition[1015]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 18:34:45.312961 ignition[1015]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 18:34:45.312961 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:34:45.312961 ignition[1015]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 18:34:45.312961 ignition[1015]: INFO : files: files passed
Dec 12 18:34:45.312961 ignition[1015]: INFO : Ignition finished successfully
Dec 12 18:34:45.317104 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 18:34:45.323366 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 18:34:45.327301 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 18:34:45.354111 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 18:34:45.354300 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 18:34:45.359681 initrd-setup-root-after-ignition[1044]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 12 18:34:45.364619 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:34:45.367343 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:34:45.369923 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 18:34:45.373548 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:34:45.373914 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 18:34:45.381176 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 18:34:45.434451 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 18:34:45.434603 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 18:34:45.437099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 18:34:45.440260 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 18:34:45.443672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 18:34:45.444916 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 18:34:45.483457 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:34:45.485142 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 12 18:34:45.569093 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:34:45.571477 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:34:45.575754 systemd[1]: Stopped target timers.target - Timer Units.
Dec 12 18:34:45.577884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 12 18:34:45.578031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 18:34:45.620585 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 12 18:34:45.622453 systemd[1]: Stopped target basic.target - Basic System.
Dec 12 18:34:45.625580 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 12 18:34:45.627035 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 18:34:45.630430 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 12 18:34:45.635862 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 18:34:45.637677 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 12 18:34:45.642888 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 18:34:45.648365 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 12 18:34:45.648538 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 12 18:34:45.651832 systemd[1]: Stopped target swap.target - Swaps.
Dec 12 18:34:45.656222 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 12 18:34:45.656378 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 18:34:45.662522 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:34:45.662702 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:34:45.668129 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 12 18:34:45.668299 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:34:45.670203 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 12 18:34:45.670342 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 12 18:34:45.677556 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 12 18:34:45.677764 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 18:34:45.680390 systemd[1]: Stopped target paths.target - Path Units.
Dec 12 18:34:45.686899 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 12 18:34:45.690887 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:34:45.691075 systemd[1]: Stopped target slices.target - Slice Units.
Dec 12 18:34:45.695407 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 12 18:34:45.699729 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 12 18:34:45.699843 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 18:34:45.701303 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 12 18:34:45.701386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 18:34:45.701923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 12 18:34:45.702062 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 18:34:45.708996 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 12 18:34:45.709108 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 12 18:34:45.713091 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 12 18:34:45.717196 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 12 18:34:45.753531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 12 18:34:45.753701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:34:45.754470 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 12 18:34:45.754567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 18:34:45.773303 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 12 18:34:45.773459 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 12 18:34:45.791602 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 12 18:34:45.794921 ignition[1070]: INFO : Ignition 2.22.0
Dec 12 18:34:45.794921 ignition[1070]: INFO : Stage: umount
Dec 12 18:34:45.794921 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 18:34:45.794921 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 18:34:45.880039 ignition[1070]: INFO : umount: umount passed
Dec 12 18:34:45.880039 ignition[1070]: INFO : Ignition finished successfully
Dec 12 18:34:45.874650 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 12 18:34:45.874802 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 12 18:34:45.876040 systemd[1]: Stopped target network.target - Network.
Dec 12 18:34:45.880227 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 12 18:34:45.880358 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 12 18:34:45.885974 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 12 18:34:45.886057 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 12 18:34:45.889099 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 12 18:34:45.889194 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 12 18:34:45.890677 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 12 18:34:45.890743 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 12 18:34:45.895396 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 12 18:34:45.896895 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 12 18:34:45.913238 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 12 18:34:45.913377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 12 18:34:45.919052 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 12 18:34:45.919336 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 12 18:34:45.919462 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 12 18:34:45.924256 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 12 18:34:45.924979 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 12 18:34:45.939100 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 12 18:34:45.939158 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:34:45.941691 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 12 18:34:45.945880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 12 18:34:45.947732 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 18:34:45.949309 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 12 18:34:45.949379 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:34:45.956125 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 12 18:34:45.956198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:34:45.957752 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 12 18:34:45.957823 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:34:45.965989 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:34:45.971019 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 12 18:34:45.971093 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:34:45.981045 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 12 18:34:45.981202 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 12 18:34:46.006875 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 12 18:34:46.007095 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:34:46.062220 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 12 18:34:46.062278 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:34:46.066412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 12 18:34:46.066478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:34:46.068407 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 12 18:34:46.068469 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 18:34:46.074250 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 12 18:34:46.074369 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 12 18:34:46.084134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 12 18:34:46.084225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 18:34:46.091662 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 12 18:34:46.093688 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 12 18:34:46.093766 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:34:46.099772 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 12 18:34:46.099854 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:34:46.107596 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 12 18:34:46.107665 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:34:46.113770 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 12 18:34:46.113863 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:34:46.119255 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 18:34:46.119330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:34:46.126168 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 12 18:34:46.126247 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Dec 12 18:34:46.126301 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 12 18:34:46.126359 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 18:34:46.126882 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 12 18:34:46.127029 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 12 18:34:46.210725 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 12 18:34:46.210927 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 12 18:34:46.217153 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 12 18:34:46.219372 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 12 18:34:46.219475 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 12 18:34:46.225117 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 12 18:34:46.256492 systemd[1]: Switching root.
Dec 12 18:34:46.294732 systemd-journald[201]: Journal stopped
Dec 12 18:34:48.039196 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Dec 12 18:34:48.039280 kernel: SELinux: policy capability network_peer_controls=1
Dec 12 18:34:48.039303 kernel: SELinux: policy capability open_perms=1
Dec 12 18:34:48.039324 kernel: SELinux: policy capability extended_socket_class=1
Dec 12 18:34:48.039339 kernel: SELinux: policy capability always_check_network=0
Dec 12 18:34:48.039358 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 12 18:34:48.039375 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 12 18:34:48.039390 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 12 18:34:48.039405 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 12 18:34:48.039419 kernel: SELinux: policy capability userspace_initial_context=0
Dec 12 18:34:48.039434 kernel: audit: type=1403 audit(1765564486.904:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 12 18:34:48.039450 systemd[1]: Successfully loaded SELinux policy in 82.781ms.
Dec 12 18:34:48.039481 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.103ms.
Dec 12 18:34:48.039505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 18:34:48.039526 systemd[1]: Detected virtualization kvm.
Dec 12 18:34:48.039543 systemd[1]: Detected architecture x86-64.
Dec 12 18:34:48.039559 systemd[1]: Detected first boot.
Dec 12 18:34:48.039574 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 18:34:48.039589 zram_generator::config[1116]: No configuration found.
Dec 12 18:34:48.039607 kernel: Guest personality initialized and is inactive
Dec 12 18:34:48.039623 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Dec 12 18:34:48.039638 kernel: Initialized host personality
Dec 12 18:34:48.039653 kernel: NET: Registered PF_VSOCK protocol family
Dec 12 18:34:48.039673 systemd[1]: Populated /etc with preset unit settings.
Dec 12 18:34:48.039690 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 12 18:34:48.039707 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 12 18:34:48.039723 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 12 18:34:48.039742 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 12 18:34:48.039759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 12 18:34:48.039777 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 12 18:34:48.039816 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 12 18:34:48.039839 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 12 18:34:48.039857 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 12 18:34:48.039874 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 12 18:34:48.039892 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 12 18:34:48.039909 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 12 18:34:48.039926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 18:34:48.039944 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 18:34:48.039961 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 12 18:34:48.039978 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 12 18:34:48.040000 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 12 18:34:48.040018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 18:34:48.040035 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 12 18:34:48.040051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 18:34:48.040068 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 18:34:48.040084 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 12 18:34:48.040114 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 12 18:34:48.040132 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 12 18:34:48.040158 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 12 18:34:48.040175 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 18:34:48.040192 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 18:34:48.040209 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 18:34:48.040226 systemd[1]: Reached target swap.target - Swaps.
Dec 12 18:34:48.040243 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 12 18:34:48.040260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 12 18:34:48.040277 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 12 18:34:48.040293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 18:34:48.040314 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 18:34:48.040332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 18:34:48.040348 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 12 18:34:48.040365 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 12 18:34:48.040382 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 12 18:34:48.040399 systemd[1]: Mounting media.mount - External Media Directory...
Dec 12 18:34:48.040424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:34:48.040441 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 12 18:34:48.040458 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 12 18:34:48.040479 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 12 18:34:48.040496 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 12 18:34:48.040514 systemd[1]: Reached target machines.target - Containers.
Dec 12 18:34:48.040533 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 12 18:34:48.040551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:34:48.040568 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 18:34:48.040584 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 12 18:34:48.040601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:34:48.040622 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:34:48.040639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:34:48.040654 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 12 18:34:48.040669 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:34:48.040687 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 12 18:34:48.040703 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 12 18:34:48.040718 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 12 18:34:48.040734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 12 18:34:48.040754 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 12 18:34:48.040770 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:34:48.040806 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 18:34:48.040823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 18:34:48.040839 kernel: loop: module loaded
Dec 12 18:34:48.040854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 18:34:48.040870 kernel: ACPI: bus type drm_connector registered
Dec 12 18:34:48.040885 kernel: fuse: init (API version 7.41)
Dec 12 18:34:48.040903 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 12 18:34:48.040925 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 12 18:34:48.040942 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 18:34:48.040964 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 12 18:34:48.040981 systemd[1]: Stopped verity-setup.service.
Dec 12 18:34:48.040998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:34:48.041015 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 12 18:34:48.041062 systemd-journald[1192]: Collecting audit messages is disabled.
Dec 12 18:34:48.041104 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 12 18:34:48.041125 systemd-journald[1192]: Journal started
Dec 12 18:34:48.041163 systemd-journald[1192]: Runtime Journal (/run/log/journal/156963ddc75c445a837e1e18099b3338) is 6M, max 48.1M, 42.1M free.
Dec 12 18:34:47.674680 systemd[1]: Queued start job for default target multi-user.target.
Dec 12 18:34:47.703068 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 12 18:34:47.703909 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 12 18:34:48.043825 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 18:34:48.047039 systemd[1]: Mounted media.mount - External Media Directory.
Dec 12 18:34:48.048947 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 12 18:34:48.051276 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 12 18:34:48.053608 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 12 18:34:48.056026 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 12 18:34:48.058876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 18:34:48.061719 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 12 18:34:48.062354 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 12 18:34:48.065360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:34:48.065693 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:34:48.068331 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:34:48.068641 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:34:48.072067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:34:48.072424 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:34:48.075337 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 12 18:34:48.075648 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 12 18:34:48.078285 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:34:48.078574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:34:48.081542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 18:34:48.084214 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 18:34:48.087253 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 12 18:34:48.090066 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 12 18:34:48.107707 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 18:34:48.111623 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 12 18:34:48.117913 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 12 18:34:48.120273 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 12 18:34:48.120339 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 18:34:48.123865 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 12 18:34:48.132701 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 12 18:34:48.135075 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:34:48.138010 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 12 18:34:48.141941 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 12 18:34:48.144422 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:34:48.147934 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 12 18:34:48.150669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:34:48.153080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 18:34:48.159247 systemd-journald[1192]: Time spent on flushing to /var/log/journal/156963ddc75c445a837e1e18099b3338 is 29.316ms for 1071 entries.
Dec 12 18:34:48.159247 systemd-journald[1192]: System Journal (/var/log/journal/156963ddc75c445a837e1e18099b3338) is 8M, max 195.6M, 187.6M free.
Dec 12 18:34:48.200861 systemd-journald[1192]: Received client request to flush runtime journal.
Dec 12 18:34:48.161212 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 12 18:34:48.183426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 18:34:48.191673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 18:34:48.195757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 12 18:34:48.200825 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 12 18:34:48.208122 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 12 18:34:48.211079 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 12 18:34:48.215805 kernel: loop0: detected capacity change from 0 to 110984
Dec 12 18:34:48.218471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 18:34:48.224492 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 12 18:34:48.228637 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 12 18:34:48.228660 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 12 18:34:48.228999 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 12 18:34:48.241449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 18:34:48.246929 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 12 18:34:48.249612 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 12 18:34:48.272996 kernel: loop1: detected capacity change from 0 to 229808
Dec 12 18:34:48.275472 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 12 18:34:48.295625 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 12 18:34:48.299596 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 18:34:48.309825 kernel: loop2: detected capacity change from 0 to 128560
Dec 12 18:34:48.325711 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Dec 12 18:34:48.325737 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Dec 12 18:34:48.330485 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 18:34:48.349832 kernel: loop3: detected capacity change from 0 to 110984
Dec 12 18:34:48.360885 kernel: loop4: detected capacity change from 0 to 229808
Dec 12 18:34:48.374822 kernel: loop5: detected capacity change from 0 to 128560
Dec 12 18:34:48.386933 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 12 18:34:48.387554 (sd-merge)[1261]: Merged extensions into '/usr'.
Dec 12 18:34:48.393161 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 12 18:34:48.393179 systemd[1]: Reloading...
Dec 12 18:34:48.473203 zram_generator::config[1290]: No configuration found.
Dec 12 18:34:48.554309 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 12 18:34:48.724352 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 12 18:34:48.725201 systemd[1]: Reloading finished in 331 ms.
Dec 12 18:34:48.758139 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 12 18:34:48.760804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 12 18:34:48.763336 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 12 18:34:48.784292 systemd[1]: Starting ensure-sysext.service...
Dec 12 18:34:48.787030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 18:34:48.814651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 18:34:48.829753 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Dec 12 18:34:48.829772 systemd[1]: Reloading...
Dec 12 18:34:48.831487 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 12 18:34:48.831954 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 12 18:34:48.832377 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 12 18:34:48.832737 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 12 18:34:48.834014 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 12 18:34:48.834426 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Dec 12 18:34:48.834516 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Dec 12 18:34:48.841089 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:34:48.841103 systemd-tmpfiles[1327]: Skipping /boot
Dec 12 18:34:48.854059 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Dec 12 18:34:48.854082 systemd-tmpfiles[1327]: Skipping /boot
Dec 12 18:34:48.855887 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Dec 12 18:34:48.899103 zram_generator::config[1359]: No configuration found.
Dec 12 18:34:49.054970 kernel: mousedev: PS/2 mouse device common for all mice
Dec 12 18:34:49.064901 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 12 18:34:49.075856 kernel: ACPI: button: Power Button [PWRF]
Dec 12 18:34:49.091651 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 12 18:34:49.092033 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 12 18:34:49.094426 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 12 18:34:49.184064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 18:34:49.186905 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 12 18:34:49.187228 systemd[1]: Reloading finished in 357 ms.
Dec 12 18:34:49.212723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 18:34:49.256269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 18:34:49.274122 kernel: kvm_amd: TSC scaling supported
Dec 12 18:34:49.274196 kernel: kvm_amd: Nested Virtualization enabled
Dec 12 18:34:49.274232 kernel: kvm_amd: Nested Paging enabled
Dec 12 18:34:49.274248 kernel: kvm_amd: LBR virtualization supported
Dec 12 18:34:49.276008 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 12 18:34:49.276042 kernel: kvm_amd: Virtual GIF supported
Dec 12 18:34:49.310816 kernel: EDAC MC: Ver: 3.0.0
Dec 12 18:34:49.311303 systemd[1]: Finished ensure-sysext.service.
Dec 12 18:34:49.337999 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:34:49.340953 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 12 18:34:49.345962 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 12 18:34:49.350895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 12 18:34:49.354392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 12 18:34:49.364951 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 12 18:34:49.368300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 12 18:34:49.372735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 12 18:34:49.374994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 12 18:34:49.383981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 12 18:34:49.386497 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 12 18:34:49.388923 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 12 18:34:49.395095 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 18:34:49.401488 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 18:34:49.405430 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 12 18:34:49.408513 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 12 18:34:49.411177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 18:34:49.411294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 12 18:34:49.412687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 12 18:34:49.412971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 12 18:34:49.413568 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 12 18:34:49.413811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 12 18:34:49.414221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 12 18:34:49.414433 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 12 18:34:49.415895 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 12 18:34:49.422495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 12 18:34:49.425650 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 12 18:34:49.431799 augenrules[1481]: No rules
Dec 12 18:34:49.434820 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 12 18:34:49.435369 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 12 18:34:49.442350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 12 18:34:49.447397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 12 18:34:49.447902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 12 18:34:49.450313 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 12 18:34:49.453129 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 12 18:34:49.462068 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 12 18:34:49.472651 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 12 18:34:49.475314 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 12 18:34:49.478093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 12 18:34:49.498044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 18:34:49.510711 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 12 18:34:49.586764 systemd-networkd[1468]: lo: Link UP
Dec 12 18:34:49.587144 systemd-networkd[1468]: lo: Gained carrier
Dec 12 18:34:49.588932 systemd-networkd[1468]: Enumeration completed
Dec 12 18:34:49.589161 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 18:34:49.589507 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:34:49.589514 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 18:34:49.591237 systemd-networkd[1468]: eth0: Link UP
Dec 12 18:34:49.591543 systemd-networkd[1468]: eth0: Gained carrier
Dec 12 18:34:49.591561 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 18:34:49.593612 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 12 18:34:49.599944 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 12 18:34:49.610874 systemd-networkd[1468]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 18:34:49.611832 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 12 18:34:49.613378 systemd-timesyncd[1474]: Network configuration changed, trying to establish connection.
Dec 12 18:34:49.614563 systemd[1]: Reached target time-set.target - System Time Set.
Dec 12 18:34:49.616461 systemd-resolved[1471]: Positive Trust Anchors:
Dec 12 18:34:49.616472 systemd-resolved[1471]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 18:34:49.616503 systemd-resolved[1471]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 18:34:49.616938 systemd-timesyncd[1474]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 12 18:34:49.617001 systemd-timesyncd[1474]: Initial clock synchronization to Fri 2025-12-12 18:34:49.811001 UTC.
Dec 12 18:34:49.620889 systemd-resolved[1471]: Defaulting to hostname 'linux'.
Dec 12 18:34:49.623159 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 18:34:49.625399 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 12 18:34:49.628356 systemd[1]: Reached target network.target - Network.
Dec 12 18:34:49.629847 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 18:34:49.631731 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 18:34:49.633520 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 12 18:34:49.635487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 18:34:49.637469 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 12 18:34:49.639491 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 18:34:49.641288 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 18:34:49.643300 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 18:34:49.645299 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 18:34:49.645328 systemd[1]: Reached target paths.target - Path Units. Dec 12 18:34:49.646864 systemd[1]: Reached target timers.target - Timer Units. Dec 12 18:34:49.650445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 18:34:49.655370 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 18:34:49.662725 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 18:34:49.665288 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 18:34:49.667468 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 18:34:49.674391 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 18:34:49.677348 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 18:34:49.680347 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 18:34:49.683228 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 18:34:49.684886 systemd[1]: Reached target basic.target - Basic System. Dec 12 18:34:49.686552 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:34:49.686585 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 18:34:49.687900 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 18:34:49.691065 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 18:34:49.694280 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 18:34:49.698395 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 18:34:49.702728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 18:34:49.704764 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 18:34:49.706235 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 12 18:34:49.718184 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 18:34:49.723328 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 18:34:49.725929 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing passwd entry cache Dec 12 18:34:49.726331 oslogin_cache_refresh[1522]: Refreshing passwd entry cache Dec 12 18:34:49.730005 jq[1520]: false Dec 12 18:34:49.729916 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 12 18:34:49.732974 oslogin_cache_refresh[1522]: Failure getting users, quitting Dec 12 18:34:49.733463 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting users, quitting Dec 12 18:34:49.733463 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:34:49.733463 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing group entry cache Dec 12 18:34:49.732995 oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 12 18:34:49.733064 oslogin_cache_refresh[1522]: Refreshing group entry cache Dec 12 18:34:49.733808 extend-filesystems[1521]: Found /dev/vda6 Dec 12 18:34:49.736974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 18:34:49.738227 extend-filesystems[1521]: Found /dev/vda9 Dec 12 18:34:49.740566 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting groups, quitting Dec 12 18:34:49.740566 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:34:49.739986 oslogin_cache_refresh[1522]: Failure getting groups, quitting Dec 12 18:34:49.739998 oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 12 18:34:49.740714 extend-filesystems[1521]: Checking size of /dev/vda9 Dec 12 18:34:49.748482 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 18:34:49.751566 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 18:34:49.752417 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 18:34:49.753935 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 18:34:49.756954 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 18:34:49.761548 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 18:34:49.765369 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 18:34:49.765752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 18:34:49.766213 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 12 18:34:49.766729 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 12 18:34:49.769397 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 18:34:49.769733 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 18:34:49.774200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 18:34:49.779036 extend-filesystems[1521]: Resized partition /dev/vda9 Dec 12 18:34:49.774593 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 12 18:34:49.785439 extend-filesystems[1546]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 18:34:49.793039 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 18:34:49.808723 jq[1543]: true Dec 12 18:34:49.831841 update_engine[1540]: I20251212 18:34:49.830035 1540 main.cc:92] Flatcar Update Engine starting Dec 12 18:34:49.835891 jq[1557]: true Dec 12 18:34:49.839603 tar[1545]: linux-amd64/LICENSE Dec 12 18:34:49.839863 tar[1545]: linux-amd64/helm Dec 12 18:34:49.844841 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 18:34:49.846342 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 18:34:49.863609 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 18:34:49.863378 dbus-daemon[1518]: [system] SELinux support is enabled Dec 12 18:34:49.918412 update_engine[1540]: I20251212 18:34:49.871979 1540 update_check_scheduler.cc:74] Next update check in 7m27s Dec 12 18:34:49.868933 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 18:34:49.868967 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 18:34:49.871752 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 18:34:49.871773 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 18:34:49.874354 systemd[1]: Started update-engine.service - Update Engine. Dec 12 18:34:49.878565 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 18:34:49.923945 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 18:34:49.923945 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 18:34:49.923945 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 18:34:49.937926 extend-filesystems[1521]: Resized filesystem in /dev/vda9 Dec 12 18:34:49.926290 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 18:34:49.928012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 18:34:49.929638 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Dec 12 18:34:49.929664 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 12 18:34:49.936722 systemd-logind[1539]: New seat seat0. Dec 12 18:34:49.945548 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 18:34:49.950671 bash[1579]: Updated "/home/core/.ssh/authorized_keys" Dec 12 18:34:49.955370 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 18:34:49.963409 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 18:34:49.982807 locksmithd[1580]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 18:34:50.313413 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 18:34:50.364011 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 18:34:50.372944 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Dec 12 18:34:50.412930 containerd[1555]: time="2025-12-12T18:34:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 18:34:50.413731 containerd[1555]: time="2025-12-12T18:34:50.413693336Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 18:34:50.431011 containerd[1555]: time="2025-12-12T18:34:50.430955116Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.9µs" Dec 12 18:34:50.431011 containerd[1555]: time="2025-12-12T18:34:50.430998299Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 18:34:50.431096 containerd[1555]: time="2025-12-12T18:34:50.431020465Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 18:34:50.431296 containerd[1555]: time="2025-12-12T18:34:50.431266259Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 18:34:50.431296 containerd[1555]: time="2025-12-12T18:34:50.431287943Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 18:34:50.431364 containerd[1555]: time="2025-12-12T18:34:50.431319211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:34:50.431423 containerd[1555]: time="2025-12-12T18:34:50.431401952Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 18:34:50.431423 containerd[1555]: time="2025-12-12T18:34:50.431418762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:34:50.431802 containerd[1555]: time="2025-12-12T18:34:50.431754615Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 18:34:50.431802 containerd[1555]: time="2025-12-12T18:34:50.431778340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:34:50.431802 containerd[1555]: time="2025-12-12T18:34:50.431789146Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 18:34:50.431802 containerd[1555]: time="2025-12-12T18:34:50.431797243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 18:34:50.433585 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 18:34:50.434049 containerd[1555]: time="2025-12-12T18:34:50.433874857Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 18:34:50.434209 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Dec 12 18:34:50.434359 containerd[1555]: time="2025-12-12T18:34:50.434331626Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:34:50.434387 containerd[1555]: time="2025-12-12T18:34:50.434374553Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 18:34:50.434417 containerd[1555]: time="2025-12-12T18:34:50.434385502Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 18:34:50.434438 containerd[1555]: time="2025-12-12T18:34:50.434421511Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 18:34:50.434740 containerd[1555]: time="2025-12-12T18:34:50.434710806Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 18:34:50.434878 containerd[1555]: time="2025-12-12T18:34:50.434858116Z" level=info msg="metadata content store policy set" policy=shared Dec 12 18:34:50.450852 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 18:34:50.459948 tar[1545]: linux-amd64/README.md Dec 12 18:34:50.490425 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 18:34:50.493757 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 18:34:50.522514 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 18:34:50.526841 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 18:34:50.529727 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 18:34:50.537154 containerd[1555]: time="2025-12-12T18:34:50.537082774Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 18:34:50.537224 containerd[1555]: time="2025-12-12T18:34:50.537193531Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 18:34:50.537247 containerd[1555]: time="2025-12-12T18:34:50.537227149Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 18:34:50.537247 containerd[1555]: time="2025-12-12T18:34:50.537242573Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 18:34:50.537286 containerd[1555]: time="2025-12-12T18:34:50.537260368Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 18:34:50.537286 containerd[1555]: time="2025-12-12T18:34:50.537277607Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 18:34:50.537351 containerd[1555]: time="2025-12-12T18:34:50.537295371Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 18:34:50.537351 containerd[1555]: time="2025-12-12T18:34:50.537312056Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 18:34:50.537351 containerd[1555]: time="2025-12-12T18:34:50.537328404Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 18:34:50.537351 containerd[1555]: time="2025-12-12T18:34:50.537341478Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service 
type=io.containerd.service.v1 Dec 12 18:34:50.537424 containerd[1555]: time="2025-12-12T18:34:50.537361550Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 18:34:50.537424 containerd[1555]: time="2025-12-12T18:34:50.537383069Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 18:34:50.537818 containerd[1555]: time="2025-12-12T18:34:50.537744721Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 18:34:50.537872 containerd[1555]: time="2025-12-12T18:34:50.537854534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 18:34:50.537894 containerd[1555]: time="2025-12-12T18:34:50.537882262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 18:34:50.537914 containerd[1555]: time="2025-12-12T18:34:50.537900641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 18:34:50.537934 containerd[1555]: time="2025-12-12T18:34:50.537916219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 18:34:50.537953 containerd[1555]: time="2025-12-12T18:34:50.537934567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 18:34:50.537953 containerd[1555]: time="2025-12-12T18:34:50.537949960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 18:34:50.538004 containerd[1555]: time="2025-12-12T18:34:50.537966882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 18:34:50.538004 containerd[1555]: time="2025-12-12T18:34:50.537982377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 18:34:50.538004 containerd[1555]: time="2025-12-12T18:34:50.537996354Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 18:34:50.538072 containerd[1555]: time="2025-12-12T18:34:50.538010146Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 18:34:50.538166 containerd[1555]: time="2025-12-12T18:34:50.538126220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 18:34:50.538166 containerd[1555]: time="2025-12-12T18:34:50.538160679Z" level=info msg="Start snapshots syncer" Dec 12 18:34:50.538220 containerd[1555]: time="2025-12-12T18:34:50.538203831Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 18:34:50.538603 containerd[1555]: time="2025-12-12T18:34:50.538544692Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 18:34:50.538714 containerd[1555]: time="2025-12-12T18:34:50.538627444Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 18:34:50.538714 containerd[1555]: time="2025-12-12T18:34:50.538702992Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 18:34:50.538902 containerd[1555]: time="2025-12-12T18:34:50.538877014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 18:34:50.538940 containerd[1555]: time="2025-12-12T18:34:50.538908898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 18:34:50.538940 containerd[1555]: time="2025-12-12T18:34:50.538922577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 18:34:50.538979 containerd[1555]: time="2025-12-12T18:34:50.538950470Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 18:34:50.538999 containerd[1555]: time="2025-12-12T18:34:50.538976042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 18:34:50.538999 containerd[1555]: time="2025-12-12T18:34:50.538991312Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 18:34:50.539044 containerd[1555]: time="2025-12-12T18:34:50.539010184Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 18:34:50.539065 containerd[1555]: time="2025-12-12T18:34:50.539043720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 18:34:50.539065 containerd[1555]: 
time="2025-12-12T18:34:50.539058744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 18:34:50.539102 containerd[1555]: time="2025-12-12T18:34:50.539072331Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 18:34:50.539166 containerd[1555]: time="2025-12-12T18:34:50.539130372Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:34:50.539216 containerd[1555]: time="2025-12-12T18:34:50.539194191Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 18:34:50.539216 containerd[1555]: time="2025-12-12T18:34:50.539211852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:34:50.539257 containerd[1555]: time="2025-12-12T18:34:50.539224874Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 18:34:50.539257 containerd[1555]: time="2025-12-12T18:34:50.539235341Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 18:34:50.539257 containerd[1555]: time="2025-12-12T18:34:50.539245706Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 18:34:50.539311 containerd[1555]: time="2025-12-12T18:34:50.539277569Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 18:34:50.539311 containerd[1555]: time="2025-12-12T18:34:50.539301695Z" level=info msg="runtime interface created" Dec 12 18:34:50.539311 containerd[1555]: time="2025-12-12T18:34:50.539308663Z" level=info msg="created NRI interface" Dec 12 18:34:50.539371 containerd[1555]: time="2025-12-12T18:34:50.539318822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 18:34:50.539371 containerd[1555]: time="2025-12-12T18:34:50.539332399Z" level=info msg="Connect containerd service" Dec 12 18:34:50.539371 containerd[1555]: time="2025-12-12T18:34:50.539361430Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 18:34:50.540684 containerd[1555]: time="2025-12-12T18:34:50.540651645Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 18:34:50.750111 containerd[1555]: time="2025-12-12T18:34:50.749833084Z" level=info msg="Start subscribing containerd event" Dec 12 18:34:50.750332 containerd[1555]: time="2025-12-12T18:34:50.749984314Z" level=info msg="Start recovering state" Dec 12 18:34:50.750563 containerd[1555]: time="2025-12-12T18:34:50.750514200Z" level=info msg="Start event monitor" Dec 12 18:34:50.750563 containerd[1555]: time="2025-12-12T18:34:50.750547705Z" level=info msg="Start cni network conf syncer for default" Dec 12 18:34:50.750563 containerd[1555]: time="2025-12-12T18:34:50.750566864Z" level=info msg="Start streaming server" Dec 12 18:34:50.750748 containerd[1555]: time="2025-12-12T18:34:50.750597085Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 18:34:50.750748 containerd[1555]: 
time="2025-12-12T18:34:50.750613187Z" level=info msg="runtime interface starting up..." Dec 12 18:34:50.750748 containerd[1555]: time="2025-12-12T18:34:50.750622515Z" level=info msg="starting plugins..." Dec 12 18:34:50.750748 containerd[1555]: time="2025-12-12T18:34:50.750650161Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 18:34:50.750748 containerd[1555]: time="2025-12-12T18:34:50.750720168Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 18:34:50.750962 containerd[1555]: time="2025-12-12T18:34:50.750868771Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 18:34:50.751189 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 18:34:50.751434 containerd[1555]: time="2025-12-12T18:34:50.751398369Z" level=info msg="containerd successfully booted in 0.339170s" Dec 12 18:34:51.114198 systemd-networkd[1468]: eth0: Gained IPv6LL Dec 12 18:34:51.118002 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 18:34:51.121050 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 18:34:51.125266 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 18:34:51.129200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:34:51.132950 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 18:34:51.176362 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 18:34:51.193536 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 18:34:51.193986 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 18:34:51.196964 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 18:34:52.585546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:34:52.588514 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 18:34:52.590708 systemd[1]: Startup finished in 4.244s (kernel) + 7.226s (initrd) + 5.754s (userspace) = 17.225s. Dec 12 18:34:52.598308 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:34:53.288814 kubelet[1650]: E1212 18:34:53.288728 1650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:34:53.293868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:34:53.294229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:34:53.294715 systemd[1]: kubelet.service: Consumed 1.864s CPU time, 265.8M memory peak. Dec 12 18:34:53.765102 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 18:34:53.766601 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:46606.service - OpenSSH per-connection server daemon (10.0.0.1:46606). 
Dec 12 18:34:53.854196 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 46606 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:53.857130 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:53.865489 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 18:34:53.867011 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 18:34:53.877884 systemd-logind[1539]: New session 1 of user core. Dec 12 18:34:53.894843 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 18:34:53.899102 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 18:34:53.926081 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 18:34:53.929362 systemd-logind[1539]: New session c1 of user core. Dec 12 18:34:54.103848 systemd[1668]: Queued start job for default target default.target. Dec 12 18:34:54.115599 systemd[1668]: Created slice app.slice - User Application Slice. Dec 12 18:34:54.115631 systemd[1668]: Reached target paths.target - Paths. Dec 12 18:34:54.115682 systemd[1668]: Reached target timers.target - Timers. Dec 12 18:34:54.118110 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 18:34:54.133053 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 18:34:54.133236 systemd[1668]: Reached target sockets.target - Sockets. Dec 12 18:34:54.133294 systemd[1668]: Reached target basic.target - Basic System. Dec 12 18:34:54.133347 systemd[1668]: Reached target default.target - Main User Target. Dec 12 18:34:54.133391 systemd[1668]: Startup finished in 195ms. Dec 12 18:34:54.134022 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 18:34:54.136319 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 18:34:54.207792 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:46620.service - OpenSSH per-connection server daemon (10.0.0.1:46620). Dec 12 18:34:54.282654 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 46620 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:54.284611 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:54.290481 systemd-logind[1539]: New session 2 of user core. Dec 12 18:34:54.300020 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 18:34:54.358476 sshd[1682]: Connection closed by 10.0.0.1 port 46620 Dec 12 18:34:54.358895 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Dec 12 18:34:54.369189 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:46620.service: Deactivated successfully. Dec 12 18:34:54.378920 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 18:34:54.379767 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Dec 12 18:34:54.382714 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). Dec 12 18:34:54.383589 systemd-logind[1539]: Removed session 2. Dec 12 18:34:54.449092 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:54.451285 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:54.456562 systemd-logind[1539]: New session 3 of user core. 
Dec 12 18:34:54.464122 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 18:34:54.523357 sshd[1692]: Connection closed by 10.0.0.1 port 46624 Dec 12 18:34:54.525323 sshd-session[1688]: pam_unix(sshd:session): session closed for user core Dec 12 18:34:54.542580 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:46624.service: Deactivated successfully. Dec 12 18:34:54.545709 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 18:34:54.546748 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Dec 12 18:34:54.549921 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:46628.service - OpenSSH per-connection server daemon (10.0.0.1:46628). Dec 12 18:34:54.550699 systemd-logind[1539]: Removed session 3. Dec 12 18:34:54.617090 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 46628 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:54.617931 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:54.623893 systemd-logind[1539]: New session 4 of user core. Dec 12 18:34:54.632192 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 18:34:54.690238 sshd[1702]: Connection closed by 10.0.0.1 port 46628 Dec 12 18:34:54.690852 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Dec 12 18:34:54.701407 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:46628.service: Deactivated successfully. Dec 12 18:34:54.707496 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 18:34:54.708847 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Dec 12 18:34:54.712452 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:46638.service - OpenSSH per-connection server daemon (10.0.0.1:46638). Dec 12 18:34:54.713225 systemd-logind[1539]: Removed session 4. Dec 12 18:34:54.770518 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 46638 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:54.772095 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:54.776985 systemd-logind[1539]: New session 5 of user core. Dec 12 18:34:54.798957 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 18:34:54.861632 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 18:34:54.862028 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:34:54.884244 sudo[1713]: pam_unix(sudo:session): session closed for user root Dec 12 18:34:54.886043 sshd[1712]: Connection closed by 10.0.0.1 port 46638 Dec 12 18:34:54.886572 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Dec 12 18:34:54.897909 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:46638.service: Deactivated successfully. Dec 12 18:34:54.900740 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 18:34:54.901763 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Dec 12 18:34:54.905154 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:46642.service - OpenSSH per-connection server daemon (10.0.0.1:46642). Dec 12 18:34:54.906067 systemd-logind[1539]: Removed session 5. 
Dec 12 18:34:54.972853 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 46642 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:54.975094 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:54.980865 systemd-logind[1539]: New session 6 of user core. Dec 12 18:34:54.991053 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 18:34:55.049632 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 18:34:55.050051 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:34:55.224020 sudo[1724]: pam_unix(sudo:session): session closed for user root Dec 12 18:34:55.231669 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 18:34:55.232032 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:34:55.243552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 18:34:55.291076 augenrules[1746]: No rules Dec 12 18:34:55.293102 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 18:34:55.293403 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 18:34:55.294733 sudo[1723]: pam_unix(sudo:session): session closed for user root Dec 12 18:34:55.296686 sshd[1722]: Connection closed by 10.0.0.1 port 46642 Dec 12 18:34:55.297141 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Dec 12 18:34:55.312448 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:46642.service: Deactivated successfully. Dec 12 18:34:55.314669 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 18:34:55.315532 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Dec 12 18:34:55.318782 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:46650.service - OpenSSH per-connection server daemon (10.0.0.1:46650). Dec 12 18:34:55.319551 systemd-logind[1539]: Removed session 6. Dec 12 18:34:55.386328 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 46650 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:34:55.388108 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:34:55.392960 systemd-logind[1539]: New session 7 of user core. Dec 12 18:34:55.403984 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 18:34:55.462677 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 18:34:55.463086 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 18:34:56.231024 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 12 18:34:56.254287 (dockerd)[1779]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 18:34:56.695334 dockerd[1779]: time="2025-12-12T18:34:56.695252803Z" level=info msg="Starting up" Dec 12 18:34:56.697230 dockerd[1779]: time="2025-12-12T18:34:56.697142156Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 18:34:56.753253 dockerd[1779]: time="2025-12-12T18:34:56.753166186Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 18:34:57.121843 dockerd[1779]: time="2025-12-12T18:34:57.121593310Z" level=info msg="Loading containers: start." Dec 12 18:34:57.137884 kernel: Initializing XFRM netlink socket Dec 12 18:34:57.503316 systemd-networkd[1468]: docker0: Link UP Dec 12 18:34:57.513326 dockerd[1779]: time="2025-12-12T18:34:57.513261976Z" level=info msg="Loading containers: done." Dec 12 18:34:57.535196 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3660656065-merged.mount: Deactivated successfully. Dec 12 18:34:57.539074 dockerd[1779]: time="2025-12-12T18:34:57.539004941Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 18:34:57.539172 dockerd[1779]: time="2025-12-12T18:34:57.539122742Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 18:34:57.539277 dockerd[1779]: time="2025-12-12T18:34:57.539250252Z" level=info msg="Initializing buildkit" Dec 12 18:34:57.579805 dockerd[1779]: time="2025-12-12T18:34:57.579706868Z" level=info msg="Completed buildkit initialization" Dec 12 18:34:57.587860 dockerd[1779]: time="2025-12-12T18:34:57.587774489Z" level=info msg="Daemon has completed initialization" Dec 12 18:34:57.588116 dockerd[1779]: time="2025-12-12T18:34:57.587979657Z" level=info msg="API listen on /run/docker.sock" Dec 12 18:34:57.588098 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 18:34:58.567369 containerd[1555]: time="2025-12-12T18:34:58.567283860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 18:35:00.471701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288766697.mount: Deactivated successfully. 
Dec 12 18:35:01.901335 containerd[1555]: time="2025-12-12T18:35:01.901193390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:01.905515 containerd[1555]: time="2025-12-12T18:35:01.905425505Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Dec 12 18:35:01.907695 containerd[1555]: time="2025-12-12T18:35:01.907608704Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:01.911982 containerd[1555]: time="2025-12-12T18:35:01.911921195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:01.915055 containerd[1555]: time="2025-12-12T18:35:01.914629704Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 3.347265881s" Dec 12 18:35:01.915055 containerd[1555]: time="2025-12-12T18:35:01.914901380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 12 18:35:01.916002 containerd[1555]: time="2025-12-12T18:35:01.915967169Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 18:35:03.372674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 18:35:03.374615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:35:03.657956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:03.679386 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:35:03.795882 kubelet[2068]: E1212 18:35:03.795223 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:35:03.805283 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:35:03.805591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:35:03.806219 systemd[1]: kubelet.service: Consumed 361ms CPU time, 111.3M memory peak. 
Dec 12 18:35:05.083833 containerd[1555]: time="2025-12-12T18:35:05.083701711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:05.118896 containerd[1555]: time="2025-12-12T18:35:05.118742794Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Dec 12 18:35:05.171713 containerd[1555]: time="2025-12-12T18:35:05.171621980Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:05.218430 containerd[1555]: time="2025-12-12T18:35:05.218347027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:05.219682 containerd[1555]: time="2025-12-12T18:35:05.219629390Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 3.303539155s" Dec 12 18:35:05.219682 containerd[1555]: time="2025-12-12T18:35:05.219668873Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Dec 12 18:35:05.220341 containerd[1555]: time="2025-12-12T18:35:05.220309280Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 18:35:06.709230 containerd[1555]: time="2025-12-12T18:35:06.709160514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:06.710058 containerd[1555]: time="2025-12-12T18:35:06.710030386Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Dec 12 18:35:06.711290 containerd[1555]: time="2025-12-12T18:35:06.711258732Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:06.713932 containerd[1555]: time="2025-12-12T18:35:06.713879420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:06.715208 containerd[1555]: time="2025-12-12T18:35:06.715168303Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.494822591s" Dec 12 18:35:06.715284 containerd[1555]: time="2025-12-12T18:35:06.715213256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 12 18:35:06.715937 
containerd[1555]: time="2025-12-12T18:35:06.715902745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 18:35:07.888592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3091568936.mount: Deactivated successfully. Dec 12 18:35:08.706150 containerd[1555]: time="2025-12-12T18:35:08.706070892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:08.706977 containerd[1555]: time="2025-12-12T18:35:08.706904225Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Dec 12 18:35:08.708282 containerd[1555]: time="2025-12-12T18:35:08.708215790Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:08.712343 containerd[1555]: time="2025-12-12T18:35:08.712235621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:08.713235 containerd[1555]: time="2025-12-12T18:35:08.713177947Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.997245368s" Dec 12 18:35:08.713235 containerd[1555]: time="2025-12-12T18:35:08.713215741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 12 18:35:08.713882 containerd[1555]: time="2025-12-12T18:35:08.713820556Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 18:35:09.377332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352353398.mount: Deactivated successfully. 
Dec 12 18:35:10.357617 containerd[1555]: time="2025-12-12T18:35:10.357352042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:10.358503 containerd[1555]: time="2025-12-12T18:35:10.358443044Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Dec 12 18:35:10.360102 containerd[1555]: time="2025-12-12T18:35:10.360059922Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:10.363399 containerd[1555]: time="2025-12-12T18:35:10.363318827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:10.364684 containerd[1555]: time="2025-12-12T18:35:10.364641481Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.650766338s" Dec 12 18:35:10.364684 containerd[1555]: time="2025-12-12T18:35:10.364678703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 12 18:35:10.365424 containerd[1555]: time="2025-12-12T18:35:10.365384567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 18:35:11.046759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441028260.mount: Deactivated successfully. 
Dec 12 18:35:11.054313 containerd[1555]: time="2025-12-12T18:35:11.054246150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:35:11.055138 containerd[1555]: time="2025-12-12T18:35:11.055087419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Dec 12 18:35:11.056374 containerd[1555]: time="2025-12-12T18:35:11.056319706Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:35:11.059696 containerd[1555]: time="2025-12-12T18:35:11.059639965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 18:35:11.060355 containerd[1555]: time="2025-12-12T18:35:11.060310976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 694.894256ms" Dec 12 18:35:11.060355 containerd[1555]: time="2025-12-12T18:35:11.060342662Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 12 18:35:11.060932 containerd[1555]: time="2025-12-12T18:35:11.060908612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 18:35:11.946240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937312606.mount: Deactivated successfully. Dec 12 18:35:13.888690 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 18:35:13.894027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:35:14.371173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:14.479107 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 18:35:15.132322 kubelet[2206]: E1212 18:35:15.132217 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 18:35:15.138213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 18:35:15.138482 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 18:35:15.139098 systemd[1]: kubelet.service: Consumed 571ms CPU time, 110.9M memory peak. 
Dec 12 18:35:16.973824 containerd[1555]: time="2025-12-12T18:35:16.972309673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:16.976820 containerd[1555]: time="2025-12-12T18:35:16.976691013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Dec 12 18:35:16.979588 containerd[1555]: time="2025-12-12T18:35:16.979285558Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:16.984690 containerd[1555]: time="2025-12-12T18:35:16.984553035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:16.986823 containerd[1555]: time="2025-12-12T18:35:16.985706375Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.924767368s" Dec 12 18:35:16.986823 containerd[1555]: time="2025-12-12T18:35:16.985750772Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 12 18:35:20.954915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:20.955175 systemd[1]: kubelet.service: Consumed 571ms CPU time, 110.9M memory peak. Dec 12 18:35:20.958601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:35:21.001810 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Dec 12 18:35:21.001856 systemd[1]: Reloading... Dec 12 18:35:21.139871 zram_generator::config[2291]: No configuration found. Dec 12 18:35:21.607975 systemd[1]: Reloading finished in 605 ms. Dec 12 18:35:21.700351 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 18:35:21.700905 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 18:35:21.702569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:21.702640 systemd[1]: kubelet.service: Consumed 220ms CPU time, 98.2M memory peak. Dec 12 18:35:21.705285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:35:22.110318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:22.128355 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:35:22.246573 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:35:22.246573 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:35:22.246573 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:35:22.246573 kubelet[2339]: I1212 18:35:22.246526 2339 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:35:23.332612 kubelet[2339]: I1212 18:35:23.332497 2339 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:35:23.332612 kubelet[2339]: I1212 18:35:23.332543 2339 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:35:23.333406 kubelet[2339]: I1212 18:35:23.332940 2339 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:35:23.406227 kubelet[2339]: E1212 18:35:23.403364 2339 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 18:35:23.406227 kubelet[2339]: I1212 18:35:23.403766 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:35:23.433096 kubelet[2339]: I1212 18:35:23.433021 2339 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:35:23.445629 kubelet[2339]: I1212 18:35:23.445539 2339 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:35:23.446126 kubelet[2339]: I1212 18:35:23.446000 2339 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:35:23.446393 kubelet[2339]: I1212 18:35:23.446100 2339 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:35:23.446731 kubelet[2339]: I1212 18:35:23.446398 2339 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 12 18:35:23.446731 kubelet[2339]: I1212 18:35:23.446415 2339 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:35:23.446731 kubelet[2339]: I1212 18:35:23.446678 2339 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:35:23.500701 kubelet[2339]: I1212 18:35:23.500623 2339 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:35:23.501481 kubelet[2339]: I1212 18:35:23.500956 2339 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:35:23.501481 kubelet[2339]: I1212 18:35:23.501016 2339 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:35:23.501481 kubelet[2339]: I1212 18:35:23.501064 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:35:23.504892 kubelet[2339]: E1212 18:35:23.504683 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:35:23.518223 kubelet[2339]: E1212 18:35:23.518146 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:35:23.532134 kubelet[2339]: I1212 18:35:23.532081 2339 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:35:23.533177 kubelet[2339]: I1212 18:35:23.533121 2339 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:35:23.537370 kubelet[2339]: W1212 18:35:23.537280 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
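[Editor's note] Every "Failed to watch ... connection refused" entry above is a client-go reflector issuing its initial LIST against https://10.0.0.51:6443 before the kube-apiserver static pod is running; the reflectors back off and retry rather than aborting. A rough standard-library sketch of that wait-for-endpoint pattern, using the address from the log (client-go itself uses jittered, per-reflector backoff over HTTP, not a bare TCP dial):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    const endpoint = "10.0.0.51:6443" // API server address from the log
    backoff := 500 * time.Millisecond
    for {
        conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
        if err == nil {
            conn.Close()
            fmt.Println("apiserver reachable:", endpoint)
            return
        }
        fmt.Printf("dial %s: %v; retrying in %s\n", endpoint, err, backoff)
        time.Sleep(backoff)
        if backoff < 8*time.Second { // cap the exponential growth
            backoff *= 2
        }
    }
}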
Dec 12 18:35:23.550243 kubelet[2339]: I1212 18:35:23.548940 2339 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:35:23.550243 kubelet[2339]: I1212 18:35:23.549042 2339 server.go:1289] "Started kubelet" Dec 12 18:35:23.608672 kubelet[2339]: I1212 18:35:23.607831 2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:35:23.608672 kubelet[2339]: I1212 18:35:23.608134 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:35:23.615437 kubelet[2339]: I1212 18:35:23.615332 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:35:23.616644 kubelet[2339]: I1212 18:35:23.615724 2339 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:35:23.617751 kubelet[2339]: I1212 18:35:23.617711 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:35:23.622929 kubelet[2339]: I1212 18:35:23.622660 2339 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:35:23.624299 kubelet[2339]: E1212 18:35:23.624188 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 18:35:23.625030 kubelet[2339]: E1212 18:35:23.624989 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" Dec 12 18:35:23.625591 kubelet[2339]: I1212 18:35:23.625567 2339 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:35:23.625995 kubelet[2339]: I1212 18:35:23.625965 2339 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:35:23.626301 kubelet[2339]: I1212 18:35:23.626070 2339 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:35:23.627108 kubelet[2339]: E1212 18:35:23.623217 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808b946e2b2f51 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 18:35:23.548979025 +0000 UTC m=+1.409916433,LastTimestamp:2025-12-12 18:35:23.548979025 +0000 UTC m=+1.409916433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 18:35:23.628756 kubelet[2339]: I1212 18:35:23.628024 2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:35:23.628995 kubelet[2339]: E1212 18:35:23.628969 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" 
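[Editor's note] The crio factory registration failure above is expected on this host: cAdvisor probes each runtime socket it knows about, and only containerd's is listening. A small sketch of that probe, pairing the crio path from the log with containerd's conventional socket path (an assumption, not shown in this log):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    candidates := []string{
        "/run/containerd/containerd.sock", // containerd's usual socket (assumed)
        "/var/run/crio/crio.sock",         // path from the failed registration above
    }
    for _, path := range candidates {
        conn, err := net.DialTimeout("unix", path, time.Second)
        if err != nil {
            fmt.Printf("%s: not available: %v\n", path, err)
            continue
        }
        conn.Close()
        fmt.Printf("%s: reachable\n", path)
    }
}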
Dec 12 18:35:23.635699 kubelet[2339]: E1212 18:35:23.635634 2339 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:35:23.639807 kubelet[2339]: I1212 18:35:23.638328 2339 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:35:23.639807 kubelet[2339]: I1212 18:35:23.638353 2339 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:35:23.673541 kubelet[2339]: I1212 18:35:23.673483 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:35:23.673541 kubelet[2339]: I1212 18:35:23.673524 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:35:23.673541 kubelet[2339]: I1212 18:35:23.673577 2339 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:35:23.682313 kubelet[2339]: I1212 18:35:23.682185 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:35:23.684447 kubelet[2339]: I1212 18:35:23.684365 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 18:35:23.684573 kubelet[2339]: I1212 18:35:23.684453 2339 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:35:23.684573 kubelet[2339]: I1212 18:35:23.684503 2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:35:23.684573 kubelet[2339]: I1212 18:35:23.684520 2339 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:35:23.684676 kubelet[2339]: E1212 18:35:23.684598 2339 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:35:23.688374 kubelet[2339]: E1212 18:35:23.688237 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 18:35:23.693432 kubelet[2339]: I1212 18:35:23.693127 2339 policy_none.go:49] "None policy: Start" Dec 12 18:35:23.693432 kubelet[2339]: I1212 18:35:23.693201 2339 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:35:23.693432 kubelet[2339]: I1212 18:35:23.693257 2339 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:35:23.711662 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 18:35:23.725033 kubelet[2339]: E1212 18:35:23.724969 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 18:35:23.734920 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 18:35:23.747520 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
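[Editor's note] The kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice hierarchy just created is the systemd cgroup driver's QoS tree; the per-pod slices that follow are named from the QoS class plus the pod UID with dashes rewritten to underscores, since systemd reserves "-" as a hierarchy separator. A tiny sketch of that mapping, reproducing slice names that appear in this log:

package main

import (
    "fmt"
    "strings"
)

// podSlice derives the systemd slice name for a pod: QoS class segment plus
// the pod UID with "-" escaped to "_" (Guaranteed pods sit directly under
// kubepods.slice, without a QoS segment).
func podSlice(qos, uid string) string {
    return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
    // UIDs taken from entries elsewhere in this log.
    fmt.Println(podSlice("burstable", "151ed30449181dcc45637e6218ddbb24"))
    fmt.Println(podSlice("besteffort", "fdc31066-b688-4cb7-ac4b-fe691747950a"))
}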
Dec 12 18:35:23.758885 kubelet[2339]: E1212 18:35:23.758633 2339 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:35:23.759044 kubelet[2339]: I1212 18:35:23.758989 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:35:23.759044 kubelet[2339]: I1212 18:35:23.759008 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:35:23.759806 kubelet[2339]: I1212 18:35:23.759459 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:35:23.760971 kubelet[2339]: E1212 18:35:23.760943 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 18:35:23.761033 kubelet[2339]: E1212 18:35:23.761001 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 18:35:23.804488 systemd[1]: Created slice kubepods-burstable-pod151ed30449181dcc45637e6218ddbb24.slice - libcontainer container kubepods-burstable-pod151ed30449181dcc45637e6218ddbb24.slice. Dec 12 18:35:23.827557 kubelet[2339]: E1212 18:35:23.827311 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" Dec 12 18:35:23.833658 kubelet[2339]: E1212 18:35:23.833576 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:23.845710 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 12 18:35:23.859766 kubelet[2339]: E1212 18:35:23.857824 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:23.869948 kubelet[2339]: I1212 18:35:23.869877 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:35:23.870427 kubelet[2339]: E1212 18:35:23.870383 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Dec 12 18:35:23.872697 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
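[Editor's note] The eviction manager starting above enforces the HardEvictionThresholds from the earlier nodeConfig dump: memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%. A compact sketch of how such signals are evaluated, against hypothetical node stats (the real manager reads these from cAdvisor/CRI summaries):

package main

import "fmt"

// threshold models one hard-eviction signal: either an absolute quantity
// (memory.available<100Mi) or a percentage of capacity (nodefs.available<10%).
type threshold struct {
    signal   string
    percent  float64 // fraction of capacity; 0 means quantity is used
    quantity uint64  // absolute bytes
}

func breached(t threshold, available, capacity uint64) bool {
    if t.percent > 0 {
        return float64(available) < t.percent*float64(capacity)
    }
    return available < t.quantity
}

func main() {
    thresholds := []threshold{
        {signal: "memory.available", quantity: 100 << 20}, // 100Mi
        {signal: "nodefs.available", percent: 0.10},
        {signal: "imagefs.available", percent: 0.15},
    }
    // Hypothetical stats: 8Gi RAM with 64Mi free, 40Gi disk with 10Gi free.
    stats := map[string][2]uint64{
        "memory.available":  {64 << 20, 8 << 30},
        "nodefs.available":  {10 << 30, 40 << 30},
        "imagefs.available": {10 << 30, 40 << 30},
    }
    for _, t := range thresholds {
        s := stats[t.signal]
        fmt.Printf("%-18s breached=%v\n", t.signal, breached(t, s[0], s[1]))
    }
}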
Dec 12 18:35:23.879109 kubelet[2339]: E1212 18:35:23.879051 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:23.927247 kubelet[2339]: I1212 18:35:23.927102 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:23.927247 kubelet[2339]: I1212 18:35:23.927159 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:23.927247 kubelet[2339]: I1212 18:35:23.927205 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:23.927247 kubelet[2339]: I1212 18:35:23.927229 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:23.927247 kubelet[2339]: I1212 18:35:23.927266 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/151ed30449181dcc45637e6218ddbb24-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"151ed30449181dcc45637e6218ddbb24\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:23.928067 kubelet[2339]: I1212 18:35:23.927293 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/151ed30449181dcc45637e6218ddbb24-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"151ed30449181dcc45637e6218ddbb24\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:23.928067 kubelet[2339]: I1212 18:35:23.927315 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:23.928067 kubelet[2339]: I1212 18:35:23.927336 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:23.928067 kubelet[2339]: I1212 18:35:23.927359 2339 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/151ed30449181dcc45637e6218ddbb24-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"151ed30449181dcc45637e6218ddbb24\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:24.072507 kubelet[2339]: I1212 18:35:24.072448 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:35:24.074130 kubelet[2339]: E1212 18:35:24.074052 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Dec 12 18:35:24.136352 kubelet[2339]: E1212 18:35:24.136027 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.137432 containerd[1555]: time="2025-12-12T18:35:24.137181348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:151ed30449181dcc45637e6218ddbb24,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:24.159719 kubelet[2339]: E1212 18:35:24.159650 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.160461 containerd[1555]: time="2025-12-12T18:35:24.160409831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:24.182857 kubelet[2339]: E1212 18:35:24.182365 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.185843 containerd[1555]: time="2025-12-12T18:35:24.185779327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:24.200428 containerd[1555]: time="2025-12-12T18:35:24.200341838Z" level=info msg="connecting to shim 7c2aa7346eed0bf409bbf67cb455d04be36d4bb5fe9c74de39d149ff904f58ac" address="unix:///run/containerd/s/400cb40cb5096d70ea304affe4caa18fecdf28b4684085768f34840fe8513f1e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:24.228511 kubelet[2339]: E1212 18:35:24.228113 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" Dec 12 18:35:24.290355 containerd[1555]: time="2025-12-12T18:35:24.290199573Z" level=info msg="connecting to shim 7a4018808fe88fef690fb2e4e53e8fa21ac69187f762f910f151c07affd25c61" address="unix:///run/containerd/s/3320a3ffdc6bce14117b64243ef0c49189e57853ec38e3f3afcd2f7adffb2d17" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:24.294160 containerd[1555]: time="2025-12-12T18:35:24.293981547Z" level=info msg="connecting to shim 3c74062bad25d09e863a785ffead5fe808c80f637402dd6aa9462da652d54567" address="unix:///run/containerd/s/da338ab12203db8d2e1dc187ff9829f5e44cff615a56f11476b949c5785b5a07" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:24.304113 systemd[1]: Started cri-containerd-7c2aa7346eed0bf409bbf67cb455d04be36d4bb5fe9c74de39d149ff904f58ac.scope - libcontainer container 
7c2aa7346eed0bf409bbf67cb455d04be36d4bb5fe9c74de39d149ff904f58ac. Dec 12 18:35:24.392033 systemd[1]: Started cri-containerd-7a4018808fe88fef690fb2e4e53e8fa21ac69187f762f910f151c07affd25c61.scope - libcontainer container 7a4018808fe88fef690fb2e4e53e8fa21ac69187f762f910f151c07affd25c61. Dec 12 18:35:24.398515 systemd[1]: Started cri-containerd-3c74062bad25d09e863a785ffead5fe808c80f637402dd6aa9462da652d54567.scope - libcontainer container 3c74062bad25d09e863a785ffead5fe808c80f637402dd6aa9462da652d54567. Dec 12 18:35:24.474095 containerd[1555]: time="2025-12-12T18:35:24.474016373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:151ed30449181dcc45637e6218ddbb24,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c2aa7346eed0bf409bbf67cb455d04be36d4bb5fe9c74de39d149ff904f58ac\"" Dec 12 18:35:24.477239 kubelet[2339]: E1212 18:35:24.477133 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.478538 kubelet[2339]: I1212 18:35:24.478498 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:35:24.479444 kubelet[2339]: E1212 18:35:24.479278 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" Dec 12 18:35:24.496996 containerd[1555]: time="2025-12-12T18:35:24.492715876Z" level=info msg="CreateContainer within sandbox \"7c2aa7346eed0bf409bbf67cb455d04be36d4bb5fe9c74de39d149ff904f58ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 18:35:24.519987 kubelet[2339]: E1212 18:35:24.519816 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.51:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 18:35:24.538741 containerd[1555]: time="2025-12-12T18:35:24.536758233Z" level=info msg="Container 40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:24.556469 kubelet[2339]: E1212 18:35:24.556387 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 18:35:24.568870 containerd[1555]: time="2025-12-12T18:35:24.568736166Z" level=info msg="CreateContainer within sandbox \"7c2aa7346eed0bf409bbf67cb455d04be36d4bb5fe9c74de39d149ff904f58ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0\"" Dec 12 18:35:24.569448 containerd[1555]: time="2025-12-12T18:35:24.569240277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c74062bad25d09e863a785ffead5fe808c80f637402dd6aa9462da652d54567\"" Dec 12 18:35:24.571337 containerd[1555]: time="2025-12-12T18:35:24.571271344Z" level=info msg="StartContainer for 
\"40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0\"" Dec 12 18:35:24.571429 kubelet[2339]: E1212 18:35:24.571323 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.573485 containerd[1555]: time="2025-12-12T18:35:24.572980448Z" level=info msg="connecting to shim 40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0" address="unix:///run/containerd/s/400cb40cb5096d70ea304affe4caa18fecdf28b4684085768f34840fe8513f1e" protocol=ttrpc version=3 Dec 12 18:35:24.575937 containerd[1555]: time="2025-12-12T18:35:24.575881887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a4018808fe88fef690fb2e4e53e8fa21ac69187f762f910f151c07affd25c61\"" Dec 12 18:35:24.577382 kubelet[2339]: E1212 18:35:24.577234 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:24.606366 containerd[1555]: time="2025-12-12T18:35:24.606242802Z" level=info msg="CreateContainer within sandbox \"3c74062bad25d09e863a785ffead5fe808c80f637402dd6aa9462da652d54567\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 18:35:24.607847 systemd[1]: Started cri-containerd-40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0.scope - libcontainer container 40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0. Dec 12 18:35:24.615366 containerd[1555]: time="2025-12-12T18:35:24.615284948Z" level=info msg="CreateContainer within sandbox \"7a4018808fe88fef690fb2e4e53e8fa21ac69187f762f910f151c07affd25c61\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 18:35:24.645171 containerd[1555]: time="2025-12-12T18:35:24.644665502Z" level=info msg="Container 95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:24.660141 containerd[1555]: time="2025-12-12T18:35:24.659221649Z" level=info msg="Container ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:24.680477 containerd[1555]: time="2025-12-12T18:35:24.680373342Z" level=info msg="CreateContainer within sandbox \"3c74062bad25d09e863a785ffead5fe808c80f637402dd6aa9462da652d54567\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a\"" Dec 12 18:35:24.681618 containerd[1555]: time="2025-12-12T18:35:24.681213066Z" level=info msg="StartContainer for \"95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a\"" Dec 12 18:35:24.682680 containerd[1555]: time="2025-12-12T18:35:24.682637282Z" level=info msg="connecting to shim 95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a" address="unix:///run/containerd/s/da338ab12203db8d2e1dc187ff9829f5e44cff615a56f11476b949c5785b5a07" protocol=ttrpc version=3 Dec 12 18:35:24.685285 containerd[1555]: time="2025-12-12T18:35:24.685237405Z" level=info msg="CreateContainer within sandbox \"7a4018808fe88fef690fb2e4e53e8fa21ac69187f762f910f151c07affd25c61\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5\"" Dec 12 
18:35:24.686199 containerd[1555]: time="2025-12-12T18:35:24.686170127Z" level=info msg="StartContainer for \"ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5\"" Dec 12 18:35:24.687831 containerd[1555]: time="2025-12-12T18:35:24.687770127Z" level=info msg="connecting to shim ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5" address="unix:///run/containerd/s/3320a3ffdc6bce14117b64243ef0c49189e57853ec38e3f3afcd2f7adffb2d17" protocol=ttrpc version=3 Dec 12 18:35:24.717280 systemd[1]: Started cri-containerd-95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a.scope - libcontainer container 95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a. Dec 12 18:35:24.720446 containerd[1555]: time="2025-12-12T18:35:24.720343745Z" level=info msg="StartContainer for \"40ba42a3dd5b32976aa8a48997fba2c73e649a397b4e3919294a56fba9f6edd0\" returns successfully" Dec 12 18:35:24.731377 systemd[1]: Started cri-containerd-ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5.scope - libcontainer container ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5. Dec 12 18:35:24.781205 kubelet[2339]: E1212 18:35:24.781139 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.51:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 18:35:24.819628 containerd[1555]: time="2025-12-12T18:35:24.819554183Z" level=info msg="StartContainer for \"ddf5be6fbdb9646e3a2d685de3978e71c160df3b12a08842c556036889076ef5\" returns successfully" Dec 12 18:35:24.860338 containerd[1555]: time="2025-12-12T18:35:24.860266500Z" level=info msg="StartContainer for \"95a18188bd9ca9bad2baf7d3e12aa0e5f0cb92b07f9a526abed85d2e8c72995a\" returns successfully" Dec 12 18:35:25.285017 kubelet[2339]: I1212 18:35:25.284952 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:35:25.722712 kubelet[2339]: E1212 18:35:25.722631 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:25.724030 kubelet[2339]: E1212 18:35:25.723073 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:25.732949 kubelet[2339]: E1212 18:35:25.732753 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:25.733161 kubelet[2339]: E1212 18:35:25.732972 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:25.742156 kubelet[2339]: E1212 18:35:25.741027 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:25.742754 kubelet[2339]: E1212 18:35:25.742717 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:26.746016 kubelet[2339]: E1212 18:35:26.745004 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:26.746016 kubelet[2339]: E1212 18:35:26.745223 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:26.746016 kubelet[2339]: E1212 18:35:26.745592 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:26.746016 kubelet[2339]: E1212 18:35:26.745701 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:26.749140 kubelet[2339]: E1212 18:35:26.748463 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:26.749140 kubelet[2339]: E1212 18:35:26.748987 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:27.744734 kubelet[2339]: E1212 18:35:27.744665 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:27.744975 kubelet[2339]: E1212 18:35:27.744919 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:27.749482 kubelet[2339]: E1212 18:35:27.748097 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 18:35:27.749482 kubelet[2339]: E1212 18:35:27.748256 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:30.153830 kubelet[2339]: E1212 18:35:30.152968 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 18:35:30.379119 kubelet[2339]: I1212 18:35:30.379058 2339 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 18:35:30.424708 kubelet[2339]: I1212 18:35:30.424495 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:30.481393 kubelet[2339]: E1212 18:35:30.481337 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:30.481393 kubelet[2339]: I1212 18:35:30.481386 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:30.489564 kubelet[2339]: E1212 18:35:30.489384 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:30.489564 kubelet[2339]: I1212 18:35:30.489436 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:30.496206 
kubelet[2339]: E1212 18:35:30.495630 2339 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:30.513772 kubelet[2339]: I1212 18:35:30.512797 2339 apiserver.go:52] "Watching apiserver" Dec 12 18:35:30.538472 kubelet[2339]: I1212 18:35:30.538390 2339 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:35:33.649092 kubelet[2339]: I1212 18:35:33.648070 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:33.678438 kubelet[2339]: E1212 18:35:33.678360 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:33.865962 kubelet[2339]: E1212 18:35:33.865551 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:34.470379 kubelet[2339]: I1212 18:35:34.468881 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:34.511856 kubelet[2339]: I1212 18:35:34.511496 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.511443085 podStartE2EDuration="1.511443085s" podCreationTimestamp="2025-12-12 18:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:35:33.790470669 +0000 UTC m=+11.651408107" watchObservedRunningTime="2025-12-12 18:35:34.511443085 +0000 UTC m=+12.372380493" Dec 12 18:35:34.518066 kubelet[2339]: E1212 18:35:34.517982 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:34.875361 kubelet[2339]: E1212 18:35:34.874665 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:35.372474 update_engine[1540]: I20251212 18:35:35.365554 1540 update_attempter.cc:509] Updating boot flags... Dec 12 18:35:35.976008 systemd[1]: Reload requested from client PID 2637 ('systemctl') (unit session-7.scope)... Dec 12 18:35:35.976032 systemd[1]: Reloading... 
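[Editor's note] The recurring dns.go:153 warnings are the kubelet applying the classic resolv.conf cap of three nameservers: the host lists more than three, so everything past the third is dropped and the applied line becomes "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation over a hypothetical resolv.conf:

package main

import (
    "fmt"
    "strings"
)

// Historical resolv.conf limit the kubelet enforces; anything past the
// third nameserver is dropped, as the warnings above report.
const maxNameservers = 3

func applyLimit(resolvConf string) []string {
    var servers []string
    for _, line := range strings.Split(resolvConf, "\n") {
        fields := strings.Fields(line)
        if len(fields) == 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if len(servers) > maxNameservers {
        servers = servers[:maxNameservers]
    }
    return servers
}

func main() {
    // Hypothetical host resolv.conf with one nameserver too many.
    conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    fmt.Println(applyLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}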
Dec 12 18:35:36.217987 kubelet[2339]: I1212 18:35:36.210118 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:36.331566 kubelet[2339]: E1212 18:35:36.287028 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:36.408881 kubelet[2339]: I1212 18:35:36.405278 2339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.405258017 podStartE2EDuration="2.405258017s" podCreationTimestamp="2025-12-12 18:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:35:36.343307152 +0000 UTC m=+14.204244560" watchObservedRunningTime="2025-12-12 18:35:36.405258017 +0000 UTC m=+14.266195425" Dec 12 18:35:36.487072 zram_generator::config[2694]: No configuration found. Dec 12 18:35:36.905527 kubelet[2339]: E1212 18:35:36.902516 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:37.500557 systemd[1]: Reloading finished in 1519 ms. Dec 12 18:35:37.644833 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:35:37.684456 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 18:35:37.691484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:37.691623 systemd[1]: kubelet.service: Consumed 2.792s CPU time, 134.2M memory peak. Dec 12 18:35:37.700941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 18:35:38.320368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 18:35:38.350609 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 18:35:38.602274 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 18:35:38.602274 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 18:35:38.602274 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
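[Editor's note] The "Referenced but unset environment variable evaluates to an empty string" notices come from systemd expanding the kubelet unit's command line: variables like $KUBELET_EXTRA_ARGS that no EnvironmentFile defines expand to nothing instead of failing the unit. A loose approximation of that behaviour (systemd's expansion rules differ in detail, and the ExecStart fragment here is hypothetical, merely in the style of a kubeadm drop-in):

package main

import (
    "fmt"
    "os"
)

func main() {
    // Hypothetical ExecStart fragment; the variable names match the log.
    execStart := "/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS"
    expanded := os.Expand(execStart, func(name string) string {
        v, ok := os.LookupEnv(name)
        if !ok {
            // systemd logs this condition, as seen above, then carries on.
            fmt.Printf("referenced but unset: %s (expands to empty string)\n", name)
        }
        return v
    })
    fmt.Println(expanded)
}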
Dec 12 18:35:38.607911 kubelet[2727]: I1212 18:35:38.600896 2727 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 18:35:38.627809 kubelet[2727]: I1212 18:35:38.625175 2727 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 18:35:38.627809 kubelet[2727]: I1212 18:35:38.625239 2727 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 18:35:38.627809 kubelet[2727]: I1212 18:35:38.625555 2727 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 18:35:38.639879 kubelet[2727]: I1212 18:35:38.634620 2727 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 18:35:38.663777 kubelet[2727]: I1212 18:35:38.661439 2727 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 18:35:38.684748 kubelet[2727]: I1212 18:35:38.684708 2727 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 18:35:38.692165 kubelet[2727]: I1212 18:35:38.691153 2727 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 18:35:38.692165 kubelet[2727]: I1212 18:35:38.691483 2727 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 18:35:38.692165 kubelet[2727]: I1212 18:35:38.691524 2727 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 18:35:38.692165 kubelet[2727]: I1212 18:35:38.691746 2727 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 18:35:38.692507 kubelet[2727]: I1212 18:35:38.691759 2727 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 18:35:38.692507 kubelet[2727]: I1212 18:35:38.691833 2727 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:35:38.692507 kubelet[2727]: I1212 
18:35:38.692027 2727 kubelet.go:480] "Attempting to sync node with API server" Dec 12 18:35:38.692507 kubelet[2727]: I1212 18:35:38.692045 2727 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 18:35:38.692507 kubelet[2727]: I1212 18:35:38.692080 2727 kubelet.go:386] "Adding apiserver pod source" Dec 12 18:35:38.692666 kubelet[2727]: I1212 18:35:38.692651 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 18:35:38.712143 kubelet[2727]: I1212 18:35:38.700583 2727 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 18:35:38.712143 kubelet[2727]: I1212 18:35:38.701094 2727 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 18:35:38.745814 kubelet[2727]: I1212 18:35:38.745051 2727 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 18:35:38.745814 kubelet[2727]: I1212 18:35:38.745160 2727 server.go:1289] "Started kubelet" Dec 12 18:35:38.752895 kubelet[2727]: I1212 18:35:38.748934 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 18:35:38.752895 kubelet[2727]: I1212 18:35:38.749363 2727 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 18:35:38.766057 kubelet[2727]: I1212 18:35:38.764817 2727 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 18:35:38.766510 kubelet[2727]: I1212 18:35:38.766480 2727 server.go:317] "Adding debug handlers to kubelet server" Dec 12 18:35:38.772638 kubelet[2727]: I1212 18:35:38.772125 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 18:35:38.779808 kubelet[2727]: I1212 18:35:38.778304 2727 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 18:35:38.801814 kubelet[2727]: I1212 18:35:38.791282 2727 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 18:35:38.801814 kubelet[2727]: I1212 18:35:38.791440 2727 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 18:35:38.801814 kubelet[2727]: I1212 18:35:38.791615 2727 reconciler.go:26] "Reconciler: start to sync state" Dec 12 18:35:38.801814 kubelet[2727]: I1212 18:35:38.797486 2727 factory.go:223] Registration of the systemd container factory successfully Dec 12 18:35:38.801814 kubelet[2727]: I1212 18:35:38.797608 2727 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 18:35:38.824856 kubelet[2727]: E1212 18:35:38.819224 2727 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 18:35:38.824856 kubelet[2727]: I1212 18:35:38.819436 2727 factory.go:223] Registration of the containerd container factory successfully Dec 12 18:35:38.933630 kubelet[2727]: I1212 18:35:38.933314 2727 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 18:35:38.947834 kubelet[2727]: I1212 18:35:38.946753 2727 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 12 18:35:38.947834 kubelet[2727]: I1212 18:35:38.946846 2727 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 18:35:38.947834 kubelet[2727]: I1212 18:35:38.946884 2727 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 18:35:38.947834 kubelet[2727]: I1212 18:35:38.946927 2727 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 18:35:38.947834 kubelet[2727]: E1212 18:35:38.947039 2727 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 18:35:39.048926 kubelet[2727]: E1212 18:35:39.048801 2727 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 18:35:39.079541 kubelet[2727]: I1212 18:35:39.079451 2727 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 18:35:39.079541 kubelet[2727]: I1212 18:35:39.079521 2727 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 18:35:39.080816 kubelet[2727]: I1212 18:35:39.079561 2727 state_mem.go:36] "Initialized new in-memory state store" Dec 12 18:35:39.082272 kubelet[2727]: I1212 18:35:39.081402 2727 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 18:35:39.082272 kubelet[2727]: I1212 18:35:39.081428 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 18:35:39.082272 kubelet[2727]: I1212 18:35:39.081485 2727 policy_none.go:49] "None policy: Start" Dec 12 18:35:39.082272 kubelet[2727]: I1212 18:35:39.081501 2727 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 18:35:39.082272 kubelet[2727]: I1212 18:35:39.081515 2727 state_mem.go:35] "Initializing new in-memory state store" Dec 12 18:35:39.082272 kubelet[2727]: I1212 18:35:39.081739 2727 state_mem.go:75] "Updated machine memory state" Dec 12 18:35:39.106442 kubelet[2727]: E1212 18:35:39.105147 2727 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 18:35:39.106442 kubelet[2727]: I1212 18:35:39.105419 2727 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 18:35:39.106442 kubelet[2727]: I1212 18:35:39.105433 2727 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 18:35:39.107813 kubelet[2727]: I1212 18:35:39.105759 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 18:35:39.111538 kubelet[2727]: E1212 18:35:39.110104 2727 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 18:35:39.249190 kubelet[2727]: I1212 18:35:39.245865 2727 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 18:35:39.256400 kubelet[2727]: I1212 18:35:39.255907 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.256400 kubelet[2727]: I1212 18:35:39.255907 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:39.261044 kubelet[2727]: I1212 18:35:39.260991 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:39.298209 kubelet[2727]: I1212 18:35:39.298096 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.298209 kubelet[2727]: I1212 18:35:39.298171 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.298209 kubelet[2727]: I1212 18:35:39.298205 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/151ed30449181dcc45637e6218ddbb24-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"151ed30449181dcc45637e6218ddbb24\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:39.298209 kubelet[2727]: I1212 18:35:39.298226 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.298209 kubelet[2727]: I1212 18:35:39.298266 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.299322 kubelet[2727]: I1212 18:35:39.298319 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.299322 kubelet[2727]: I1212 18:35:39.298346 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:39.299322 kubelet[2727]: I1212 18:35:39.298372 2727 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/151ed30449181dcc45637e6218ddbb24-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"151ed30449181dcc45637e6218ddbb24\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:39.299322 kubelet[2727]: I1212 18:35:39.298396 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/151ed30449181dcc45637e6218ddbb24-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"151ed30449181dcc45637e6218ddbb24\") " pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:39.375890 kubelet[2727]: E1212 18:35:39.374694 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 18:35:39.379633 kubelet[2727]: E1212 18:35:39.379446 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:39.379633 kubelet[2727]: E1212 18:35:39.379477 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 18:35:39.462376 kubelet[2727]: I1212 18:35:39.462302 2727 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 18:35:39.462615 kubelet[2727]: I1212 18:35:39.462467 2727 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 18:35:39.676353 kubelet[2727]: E1212 18:35:39.676097 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:39.681245 kubelet[2727]: E1212 18:35:39.680067 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:39.681245 kubelet[2727]: E1212 18:35:39.680279 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:39.708498 kubelet[2727]: I1212 18:35:39.705645 2727 apiserver.go:52] "Watching apiserver" Dec 12 18:35:39.795347 kubelet[2727]: I1212 18:35:39.792467 2727 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 18:35:40.008156 kubelet[2727]: I1212 18:35:40.007473 2727 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 18:35:40.010777 containerd[1555]: time="2025-12-12T18:35:40.010079468Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
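[Editor's note] The runtime-config update above hands the node's pod CIDR (192.168.0.0/24) to containerd, which the CNI plugin will carve pod IPs from once its config is dropped in. A small sketch of what that allocation space amounts to; excluding the network and broadcast addresses is a convention here, and the exact usable count depends on the CNI IPAM plugin's own reservations:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Pod CIDR handed to the runtime in the entry above.
    _, cidr, err := net.ParseCIDR("192.168.0.0/24")
    if err != nil {
        panic(err)
    }
    ones, bits := cidr.Mask.Size()
    // 2^(host bits) minus network and broadcast addresses.
    fmt.Printf("pod CIDR %s: %d allocatable pod IPs\n", cidr, (1<<(bits-ones))-2)
}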
Dec 12 18:35:40.011263 kubelet[2727]: I1212 18:35:40.010349 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 18:35:40.047931 kubelet[2727]: E1212 18:35:40.043619 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:40.047931 kubelet[2727]: I1212 18:35:40.044618 2727 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:40.048826 kubelet[2727]: E1212 18:35:40.048773 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:40.129978 kubelet[2727]: E1212 18:35:40.124823 2727 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 12 18:35:40.129978 kubelet[2727]: E1212 18:35:40.128775 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:40.760172 systemd[1]: Created slice kubepods-besteffort-podfdc31066_b688_4cb7_ac4b_fe691747950a.slice - libcontainer container kubepods-besteffort-podfdc31066_b688_4cb7_ac4b_fe691747950a.slice. Dec 12 18:35:40.831592 kubelet[2727]: I1212 18:35:40.831513 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdc31066-b688-4cb7-ac4b-fe691747950a-xtables-lock\") pod \"kube-proxy-p9vvh\" (UID: \"fdc31066-b688-4cb7-ac4b-fe691747950a\") " pod="kube-system/kube-proxy-p9vvh" Dec 12 18:35:40.831592 kubelet[2727]: I1212 18:35:40.831579 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdc31066-b688-4cb7-ac4b-fe691747950a-lib-modules\") pod \"kube-proxy-p9vvh\" (UID: \"fdc31066-b688-4cb7-ac4b-fe691747950a\") " pod="kube-system/kube-proxy-p9vvh" Dec 12 18:35:40.831592 kubelet[2727]: I1212 18:35:40.831613 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcwr8\" (UniqueName: \"kubernetes.io/projected/fdc31066-b688-4cb7-ac4b-fe691747950a-kube-api-access-rcwr8\") pod \"kube-proxy-p9vvh\" (UID: \"fdc31066-b688-4cb7-ac4b-fe691747950a\") " pod="kube-system/kube-proxy-p9vvh" Dec 12 18:35:40.832359 kubelet[2727]: I1212 18:35:40.831639 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdc31066-b688-4cb7-ac4b-fe691747950a-kube-proxy\") pod \"kube-proxy-p9vvh\" (UID: \"fdc31066-b688-4cb7-ac4b-fe691747950a\") " pod="kube-system/kube-proxy-p9vvh" Dec 12 18:35:41.056171 kubelet[2727]: E1212 18:35:41.051731 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:41.056171 kubelet[2727]: E1212 18:35:41.052045 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:41.056171 kubelet[2727]: E1212 18:35:41.054320 2727 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:41.086438 kubelet[2727]: E1212 18:35:41.085435 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:41.090077 containerd[1555]: time="2025-12-12T18:35:41.087083239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9vvh,Uid:fdc31066-b688-4cb7-ac4b-fe691747950a,Namespace:kube-system,Attempt:0,}" Dec 12 18:35:41.643351 containerd[1555]: time="2025-12-12T18:35:41.634356172Z" level=info msg="connecting to shim c0687eda931408b62a0cbc96cd78a2d2f21e13c11e9cbd5e383af631299a9f3b" address="unix:///run/containerd/s/e3c7ddd54979141879826782bd10510d438979c4919f1f47fde06cc48bee6e30" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:41.784580 systemd[1]: Started cri-containerd-c0687eda931408b62a0cbc96cd78a2d2f21e13c11e9cbd5e383af631299a9f3b.scope - libcontainer container c0687eda931408b62a0cbc96cd78a2d2f21e13c11e9cbd5e383af631299a9f3b. Dec 12 18:35:41.923905 containerd[1555]: time="2025-12-12T18:35:41.923702431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9vvh,Uid:fdc31066-b688-4cb7-ac4b-fe691747950a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0687eda931408b62a0cbc96cd78a2d2f21e13c11e9cbd5e383af631299a9f3b\"" Dec 12 18:35:41.926011 kubelet[2727]: E1212 18:35:41.925976 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:42.071305 containerd[1555]: time="2025-12-12T18:35:42.071144130Z" level=info msg="CreateContainer within sandbox \"c0687eda931408b62a0cbc96cd78a2d2f21e13c11e9cbd5e383af631299a9f3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 18:35:42.072409 kubelet[2727]: E1212 18:35:42.072257 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:42.072409 kubelet[2727]: E1212 18:35:42.072370 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:42.256232 containerd[1555]: time="2025-12-12T18:35:42.256079486Z" level=info msg="Container 4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:42.257771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266637268.mount: Deactivated successfully. Dec 12 18:35:42.271152 systemd[1]: Created slice kubepods-besteffort-pod665851ec_bbfa_4789_9d75_367ff0a8731d.slice - libcontainer container kubepods-besteffort-pod665851ec_bbfa_4789_9d75_367ff0a8731d.slice. 
Dec 12 18:35:42.292614 containerd[1555]: time="2025-12-12T18:35:42.292481257Z" level=info msg="CreateContainer within sandbox \"c0687eda931408b62a0cbc96cd78a2d2f21e13c11e9cbd5e383af631299a9f3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27\"" Dec 12 18:35:42.303276 containerd[1555]: time="2025-12-12T18:35:42.303097451Z" level=info msg="StartContainer for \"4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27\"" Dec 12 18:35:42.307585 containerd[1555]: time="2025-12-12T18:35:42.307505423Z" level=info msg="connecting to shim 4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27" address="unix:///run/containerd/s/e3c7ddd54979141879826782bd10510d438979c4919f1f47fde06cc48bee6e30" protocol=ttrpc version=3 Dec 12 18:35:42.350114 systemd[1]: Started cri-containerd-4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27.scope - libcontainer container 4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27. Dec 12 18:35:42.355768 kubelet[2727]: I1212 18:35:42.355620 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpblm\" (UniqueName: \"kubernetes.io/projected/665851ec-bbfa-4789-9d75-367ff0a8731d-kube-api-access-zpblm\") pod \"tigera-operator-7dcd859c48-g54r7\" (UID: \"665851ec-bbfa-4789-9d75-367ff0a8731d\") " pod="tigera-operator/tigera-operator-7dcd859c48-g54r7" Dec 12 18:35:42.355768 kubelet[2727]: I1212 18:35:42.355707 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/665851ec-bbfa-4789-9d75-367ff0a8731d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g54r7\" (UID: \"665851ec-bbfa-4789-9d75-367ff0a8731d\") " pod="tigera-operator/tigera-operator-7dcd859c48-g54r7" Dec 12 18:35:42.615442 containerd[1555]: time="2025-12-12T18:35:42.615151555Z" level=info msg="StartContainer for \"4b3734ed016251654598169df703484f1016065491ae99b39386603487cdad27\" returns successfully" Dec 12 18:35:42.880992 containerd[1555]: time="2025-12-12T18:35:42.879946688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g54r7,Uid:665851ec-bbfa-4789-9d75-367ff0a8731d,Namespace:tigera-operator,Attempt:0,}" Dec 12 18:35:43.018947 containerd[1555]: time="2025-12-12T18:35:43.018851252Z" level=info msg="connecting to shim 64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685" address="unix:///run/containerd/s/b18c11b7e1514ac462532209f8fb31fc85902e145634cc6b60d2aa99d45e468b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:35:43.065097 kubelet[2727]: E1212 18:35:43.065035 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:43.065673 kubelet[2727]: E1212 18:35:43.065339 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:43.066498 systemd[1]: Started cri-containerd-64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685.scope - libcontainer container 64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685. 
Dec 12 18:35:43.280293 containerd[1555]: time="2025-12-12T18:35:43.280082591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g54r7,Uid:665851ec-bbfa-4789-9d75-367ff0a8731d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685\"" Dec 12 18:35:43.285942 containerd[1555]: time="2025-12-12T18:35:43.285593418Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 18:35:44.094036 kubelet[2727]: E1212 18:35:44.093831 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:44.094748 kubelet[2727]: E1212 18:35:44.094579 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:44.337673 kubelet[2727]: E1212 18:35:44.337348 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:44.418575 kubelet[2727]: I1212 18:35:44.415955 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p9vvh" podStartSLOduration=4.415928367 podStartE2EDuration="4.415928367s" podCreationTimestamp="2025-12-12 18:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:35:43.115050769 +0000 UTC m=+4.730417688" watchObservedRunningTime="2025-12-12 18:35:44.415928367 +0000 UTC m=+6.031295266" Dec 12 18:35:45.098053 kubelet[2727]: E1212 18:35:45.098011 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:45.487525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521132547.mount: Deactivated successfully. 
Dec 12 18:35:46.100809 kubelet[2727]: E1212 18:35:46.100721 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:35:46.160532 containerd[1555]: time="2025-12-12T18:35:46.160445599Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:46.163655 containerd[1555]: time="2025-12-12T18:35:46.163447890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Dec 12 18:35:46.166494 containerd[1555]: time="2025-12-12T18:35:46.166371785Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:46.178713 containerd[1555]: time="2025-12-12T18:35:46.178138585Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:35:46.179871 containerd[1555]: time="2025-12-12T18:35:46.179766549Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.894110014s" Dec 12 18:35:46.179871 containerd[1555]: time="2025-12-12T18:35:46.179848053Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 12 18:35:46.199125 containerd[1555]: time="2025-12-12T18:35:46.199044555Z" level=info msg="CreateContainer within sandbox \"64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 18:35:46.222817 containerd[1555]: time="2025-12-12T18:35:46.222711242Z" level=info msg="Container 150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:46.238610 containerd[1555]: time="2025-12-12T18:35:46.238349181Z" level=info msg="CreateContainer within sandbox \"64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3\"" Dec 12 18:35:46.242401 containerd[1555]: time="2025-12-12T18:35:46.239052005Z" level=info msg="StartContainer for \"150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3\"" Dec 12 18:35:46.242401 containerd[1555]: time="2025-12-12T18:35:46.240498748Z" level=info msg="connecting to shim 150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3" address="unix:///run/containerd/s/b18c11b7e1514ac462532209f8fb31fc85902e145634cc6b60d2aa99d45e468b" protocol=ttrpc version=3 Dec 12 18:35:46.286165 systemd[1]: Started cri-containerd-150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3.scope - libcontainer container 150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3. 
Dec 12 18:35:46.361000 containerd[1555]: time="2025-12-12T18:35:46.360775258Z" level=info msg="StartContainer for \"150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3\" returns successfully" Dec 12 18:35:49.430427 systemd[1]: cri-containerd-150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3.scope: Deactivated successfully. Dec 12 18:35:49.439537 containerd[1555]: time="2025-12-12T18:35:49.438640720Z" level=info msg="received container exit event container_id:\"150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3\" id:\"150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3\" pid:3061 exit_status:1 exited_at:{seconds:1765564549 nanos:432814694}" Dec 12 18:35:49.645912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3-rootfs.mount: Deactivated successfully. Dec 12 18:35:51.129141 kubelet[2727]: I1212 18:35:51.129094 2727 scope.go:117] "RemoveContainer" containerID="150a6a2ce1e6e6b7e166a82ec4ab8034ce0ac840c4797cc0dce4649b20267cb3" Dec 12 18:35:51.137285 containerd[1555]: time="2025-12-12T18:35:51.137227237Z" level=info msg="CreateContainer within sandbox \"64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 18:35:51.206283 containerd[1555]: time="2025-12-12T18:35:51.204596763Z" level=info msg="Container 389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:35:51.222751 containerd[1555]: time="2025-12-12T18:35:51.222692235Z" level=info msg="CreateContainer within sandbox \"64600f573442154f8e35aecc733c7bfd1a3277eff54d6bed216bbfba9d6d9685\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a\"" Dec 12 18:35:51.225005 containerd[1555]: time="2025-12-12T18:35:51.224948737Z" level=info msg="StartContainer for \"389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a\"" Dec 12 18:35:51.226947 containerd[1555]: time="2025-12-12T18:35:51.226828454Z" level=info msg="connecting to shim 389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a" address="unix:///run/containerd/s/b18c11b7e1514ac462532209f8fb31fc85902e145634cc6b60d2aa99d45e468b" protocol=ttrpc version=3 Dec 12 18:35:51.272193 systemd[1]: Started cri-containerd-389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a.scope - libcontainer container 389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a. 
Dec 12 18:35:51.370529 containerd[1555]: time="2025-12-12T18:35:51.370464117Z" level=info msg="StartContainer for \"389a38d291a88d54ab2461ddb6f1737a1dc5b0a4308aea4ad944d9b6767d359a\" returns successfully" Dec 12 18:35:52.218118 kubelet[2727]: I1212 18:35:52.217462 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g54r7" podStartSLOduration=7.314587069 podStartE2EDuration="10.217428072s" podCreationTimestamp="2025-12-12 18:35:42 +0000 UTC" firstStartedPulling="2025-12-12 18:35:43.283418475 +0000 UTC m=+4.898785374" lastFinishedPulling="2025-12-12 18:35:46.186259468 +0000 UTC m=+7.801626377" observedRunningTime="2025-12-12 18:35:47.229848226 +0000 UTC m=+8.845215125" watchObservedRunningTime="2025-12-12 18:35:52.217428072 +0000 UTC m=+13.832794971" Dec 12 18:35:54.350803 sudo[1759]: pam_unix(sudo:session): session closed for user root Dec 12 18:35:54.353154 sshd[1758]: Connection closed by 10.0.0.1 port 46650 Dec 12 18:35:54.368029 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Dec 12 18:35:54.376850 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:46650.service: Deactivated successfully. Dec 12 18:35:54.382209 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 18:35:54.384741 systemd[1]: session-7.scope: Consumed 7.432s CPU time, 221.3M memory peak. Dec 12 18:35:54.392607 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Dec 12 18:35:54.398747 systemd-logind[1539]: Removed session 7. Dec 12 18:36:08.157851 systemd[1]: Created slice kubepods-besteffort-pod5a9838e1_4e9b_40b5_ba2d_fad80175f64d.slice - libcontainer container kubepods-besteffort-pod5a9838e1_4e9b_40b5_ba2d_fad80175f64d.slice. Dec 12 18:36:08.221810 kubelet[2727]: I1212 18:36:08.221506 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5a9838e1-4e9b-40b5-ba2d-fad80175f64d-typha-certs\") pod \"calico-typha-649c7c4f4c-ndg97\" (UID: \"5a9838e1-4e9b-40b5-ba2d-fad80175f64d\") " pod="calico-system/calico-typha-649c7c4f4c-ndg97" Dec 12 18:36:08.221810 kubelet[2727]: I1212 18:36:08.221596 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qng\" (UniqueName: \"kubernetes.io/projected/5a9838e1-4e9b-40b5-ba2d-fad80175f64d-kube-api-access-67qng\") pod \"calico-typha-649c7c4f4c-ndg97\" (UID: \"5a9838e1-4e9b-40b5-ba2d-fad80175f64d\") " pod="calico-system/calico-typha-649c7c4f4c-ndg97" Dec 12 18:36:08.221810 kubelet[2727]: I1212 18:36:08.221630 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a9838e1-4e9b-40b5-ba2d-fad80175f64d-tigera-ca-bundle\") pod \"calico-typha-649c7c4f4c-ndg97\" (UID: \"5a9838e1-4e9b-40b5-ba2d-fad80175f64d\") " pod="calico-system/calico-typha-649c7c4f4c-ndg97" Dec 12 18:36:08.415098 systemd[1]: Created slice kubepods-besteffort-pod6a461261_ac6f_496e_9834_60e79b35dd91.slice - libcontainer container kubepods-besteffort-pod6a461261_ac6f_496e_9834_60e79b35dd91.slice. 
Dec 12 18:36:08.464903 kubelet[2727]: E1212 18:36:08.464768 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:08.465665 containerd[1555]: time="2025-12-12T18:36:08.465615775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-649c7c4f4c-ndg97,Uid:5a9838e1-4e9b-40b5-ba2d-fad80175f64d,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:08.537229 kubelet[2727]: I1212 18:36:08.537134 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-flexvol-driver-host\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537229 kubelet[2727]: I1212 18:36:08.537202 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-lib-modules\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537229 kubelet[2727]: I1212 18:36:08.537226 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-var-lib-calico\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537229 kubelet[2727]: I1212 18:36:08.537247 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a461261-ac6f-496e-9834-60e79b35dd91-tigera-ca-bundle\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537720 kubelet[2727]: I1212 18:36:08.537267 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-var-run-calico\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537720 kubelet[2727]: I1212 18:36:08.537288 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-cni-log-dir\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537720 kubelet[2727]: I1212 18:36:08.537314 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-xtables-lock\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537720 kubelet[2727]: I1212 18:36:08.537364 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-cni-bin-dir\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " 
pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537720 kubelet[2727]: I1212 18:36:08.537383 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-policysync\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537975 kubelet[2727]: I1212 18:36:08.537403 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6a461261-ac6f-496e-9834-60e79b35dd91-node-certs\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537975 kubelet[2727]: I1212 18:36:08.537432 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6a461261-ac6f-496e-9834-60e79b35dd91-cni-net-dir\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.537975 kubelet[2727]: I1212 18:36:08.537455 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tkz5\" (UniqueName: \"kubernetes.io/projected/6a461261-ac6f-496e-9834-60e79b35dd91-kube-api-access-8tkz5\") pod \"calico-node-g4r7c\" (UID: \"6a461261-ac6f-496e-9834-60e79b35dd91\") " pod="calico-system/calico-node-g4r7c" Dec 12 18:36:08.726997 kubelet[2727]: E1212 18:36:08.726681 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.726997 kubelet[2727]: W1212 18:36:08.726726 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.730617 kubelet[2727]: E1212 18:36:08.730502 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.824699 containerd[1555]: time="2025-12-12T18:36:08.824619715Z" level=info msg="connecting to shim 49dd0f87c3b2c40313c07b6b4bcf0fe3177110adf22a995f6112fedc20eca1cc" address="unix:///run/containerd/s/a5e5208efcb3c2584687e25cc66464100e96baf6368db7f57b9d8b0504fa3816" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:08.853137 kubelet[2727]: E1212 18:36:08.853025 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:08.896168 systemd[1]: Started cri-containerd-49dd0f87c3b2c40313c07b6b4bcf0fe3177110adf22a995f6112fedc20eca1cc.scope - libcontainer container 49dd0f87c3b2c40313c07b6b4bcf0fe3177110adf22a995f6112fedc20eca1cc. 
Dec 12 18:36:08.948677 kubelet[2727]: E1212 18:36:08.948341 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.948898 kubelet[2727]: W1212 18:36:08.948582 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.948898 kubelet[2727]: E1212 18:36:08.948822 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.950501 kubelet[2727]: E1212 18:36:08.950468 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.950633 kubelet[2727]: W1212 18:36:08.950492 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.950701 kubelet[2727]: E1212 18:36:08.950636 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.956001 kubelet[2727]: E1212 18:36:08.953298 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.956001 kubelet[2727]: W1212 18:36:08.953322 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.956001 kubelet[2727]: E1212 18:36:08.953347 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.956001 kubelet[2727]: E1212 18:36:08.956412 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.956001 kubelet[2727]: W1212 18:36:08.956432 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.956001 kubelet[2727]: E1212 18:36:08.956454 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.958541 kubelet[2727]: E1212 18:36:08.958434 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.958541 kubelet[2727]: W1212 18:36:08.958487 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.958541 kubelet[2727]: E1212 18:36:08.958511 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:08.958965 kubelet[2727]: E1212 18:36:08.958947 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.958965 kubelet[2727]: W1212 18:36:08.958957 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.959125 kubelet[2727]: E1212 18:36:08.958996 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.959867 kubelet[2727]: E1212 18:36:08.959330 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.959867 kubelet[2727]: W1212 18:36:08.959343 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.959867 kubelet[2727]: E1212 18:36:08.959355 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.959867 kubelet[2727]: E1212 18:36:08.959624 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.959867 kubelet[2727]: W1212 18:36:08.959635 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.959867 kubelet[2727]: E1212 18:36:08.959650 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.961842 kubelet[2727]: E1212 18:36:08.961126 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.961842 kubelet[2727]: W1212 18:36:08.961143 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.961842 kubelet[2727]: E1212 18:36:08.961172 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.961842 kubelet[2727]: E1212 18:36:08.961573 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.961842 kubelet[2727]: W1212 18:36:08.961587 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.961842 kubelet[2727]: E1212 18:36:08.961637 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:08.962774 kubelet[2727]: E1212 18:36:08.962741 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.962774 kubelet[2727]: W1212 18:36:08.962764 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.962919 kubelet[2727]: E1212 18:36:08.962780 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.963212 kubelet[2727]: E1212 18:36:08.963184 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.963212 kubelet[2727]: W1212 18:36:08.963203 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.963297 kubelet[2727]: E1212 18:36:08.963218 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.963582 kubelet[2727]: E1212 18:36:08.963550 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.963642 kubelet[2727]: W1212 18:36:08.963600 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.963642 kubelet[2727]: E1212 18:36:08.963615 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.964017 kubelet[2727]: E1212 18:36:08.963990 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.964017 kubelet[2727]: W1212 18:36:08.964009 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.964022 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.964325 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.965270 kubelet[2727]: W1212 18:36:08.964335 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.964374 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.964671 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.965270 kubelet[2727]: W1212 18:36:08.964683 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.964696 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.965110 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.965270 kubelet[2727]: W1212 18:36:08.965123 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.965270 kubelet[2727]: E1212 18:36:08.965168 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.973609 kubelet[2727]: E1212 18:36:08.972363 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.973609 kubelet[2727]: W1212 18:36:08.972429 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.973609 kubelet[2727]: E1212 18:36:08.972473 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.973609 kubelet[2727]: E1212 18:36:08.972878 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.973609 kubelet[2727]: W1212 18:36:08.972892 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.973609 kubelet[2727]: E1212 18:36:08.972907 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:08.973609 kubelet[2727]: E1212 18:36:08.973240 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:08.973609 kubelet[2727]: W1212 18:36:08.973255 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:08.973609 kubelet[2727]: E1212 18:36:08.973268 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:09.029747 kubelet[2727]: E1212 18:36:09.029070 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:09.040379 containerd[1555]: time="2025-12-12T18:36:09.037001463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4r7c,Uid:6a461261-ac6f-496e-9834-60e79b35dd91,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:09.051808 kubelet[2727]: E1212 18:36:09.051720 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.051808 kubelet[2727]: W1212 18:36:09.051775 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.052061 kubelet[2727]: E1212 18:36:09.051840 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.052061 kubelet[2727]: I1212 18:36:09.051898 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8652b687-d41e-47f6-a864-e604e24deb5b-kubelet-dir\") pod \"csi-node-driver-mtm8f\" (UID: \"8652b687-d41e-47f6-a864-e604e24deb5b\") " pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:09.052844 kubelet[2727]: E1212 18:36:09.052809 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.052844 kubelet[2727]: W1212 18:36:09.052839 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.053155 kubelet[2727]: E1212 18:36:09.052857 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.053155 kubelet[2727]: I1212 18:36:09.052882 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8652b687-d41e-47f6-a864-e604e24deb5b-varrun\") pod \"csi-node-driver-mtm8f\" (UID: \"8652b687-d41e-47f6-a864-e604e24deb5b\") " pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:09.054695 kubelet[2727]: E1212 18:36:09.054533 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.054695 kubelet[2727]: W1212 18:36:09.054560 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.054695 kubelet[2727]: E1212 18:36:09.054693 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:09.055831 kubelet[2727]: I1212 18:36:09.055317 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8652b687-d41e-47f6-a864-e604e24deb5b-registration-dir\") pod \"csi-node-driver-mtm8f\" (UID: \"8652b687-d41e-47f6-a864-e604e24deb5b\") " pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:09.056528 kubelet[2727]: E1212 18:36:09.056444 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.056528 kubelet[2727]: W1212 18:36:09.056524 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.056623 kubelet[2727]: E1212 18:36:09.056574 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.056623 kubelet[2727]: I1212 18:36:09.056595 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8652b687-d41e-47f6-a864-e604e24deb5b-socket-dir\") pod \"csi-node-driver-mtm8f\" (UID: \"8652b687-d41e-47f6-a864-e604e24deb5b\") " pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:09.057038 kubelet[2727]: E1212 18:36:09.057008 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.057038 kubelet[2727]: W1212 18:36:09.057029 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.057153 kubelet[2727]: E1212 18:36:09.057048 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.057153 kubelet[2727]: I1212 18:36:09.057130 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6v6s\" (UniqueName: \"kubernetes.io/projected/8652b687-d41e-47f6-a864-e604e24deb5b-kube-api-access-h6v6s\") pod \"csi-node-driver-mtm8f\" (UID: \"8652b687-d41e-47f6-a864-e604e24deb5b\") " pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:09.057853 kubelet[2727]: E1212 18:36:09.057706 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.057853 kubelet[2727]: W1212 18:36:09.057850 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.057981 kubelet[2727]: E1212 18:36:09.057866 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:09.058316 kubelet[2727]: E1212 18:36:09.058298 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.058316 kubelet[2727]: W1212 18:36:09.058312 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.058402 kubelet[2727]: E1212 18:36:09.058324 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.058572 kubelet[2727]: E1212 18:36:09.058558 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.058572 kubelet[2727]: W1212 18:36:09.058569 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.058656 kubelet[2727]: E1212 18:36:09.058578 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.058930 kubelet[2727]: E1212 18:36:09.058894 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.058930 kubelet[2727]: W1212 18:36:09.058907 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.058930 kubelet[2727]: E1212 18:36:09.058918 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.059358 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.060439 kubelet[2727]: W1212 18:36:09.059370 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.059382 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.059778 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.060439 kubelet[2727]: W1212 18:36:09.059826 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.059838 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.060103 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.060439 kubelet[2727]: W1212 18:36:09.060112 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.060123 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.060439 kubelet[2727]: E1212 18:36:09.060442 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.060811 kubelet[2727]: W1212 18:36:09.060458 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.060811 kubelet[2727]: E1212 18:36:09.060470 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.060811 kubelet[2727]: E1212 18:36:09.060762 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.060811 kubelet[2727]: W1212 18:36:09.060772 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.060945 kubelet[2727]: E1212 18:36:09.060820 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.061060 kubelet[2727]: E1212 18:36:09.061040 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.061060 kubelet[2727]: W1212 18:36:09.061052 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.061148 kubelet[2727]: E1212 18:36:09.061062 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:09.089753 containerd[1555]: time="2025-12-12T18:36:09.089601744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-649c7c4f4c-ndg97,Uid:5a9838e1-4e9b-40b5-ba2d-fad80175f64d,Namespace:calico-system,Attempt:0,} returns sandbox id \"49dd0f87c3b2c40313c07b6b4bcf0fe3177110adf22a995f6112fedc20eca1cc\"" Dec 12 18:36:09.093820 kubelet[2727]: E1212 18:36:09.093693 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:09.100223 containerd[1555]: time="2025-12-12T18:36:09.100150834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 18:36:09.143838 containerd[1555]: time="2025-12-12T18:36:09.143207914Z" level=info msg="connecting to shim 39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96" address="unix:///run/containerd/s/561becdfeb76663763e605dff422d96c5e6172ee4b34ff575b62cbdd43322acd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:09.160829 kubelet[2727]: E1212 18:36:09.159719 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.160829 kubelet[2727]: W1212 18:36:09.159813 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.160829 kubelet[2727]: E1212 18:36:09.159883 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.164215 kubelet[2727]: E1212 18:36:09.162997 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.164215 kubelet[2727]: W1212 18:36:09.163014 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.164215 kubelet[2727]: E1212 18:36:09.163032 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:09.164679 kubelet[2727]: E1212 18:36:09.164657 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:09.164822 kubelet[2727]: W1212 18:36:09.164806 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:09.164945 kubelet[2727]: E1212 18:36:09.164912 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 12 18:36:09.245320 systemd[1]: Started cri-containerd-39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96.scope - libcontainer container 39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96. Dec 12 18:36:09.344092 containerd[1555]: time="2025-12-12T18:36:09.343862929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4r7c,Uid:6a461261-ac6f-496e-9834-60e79b35dd91,Namespace:calico-system,Attempt:0,} returns sandbox id \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\"" Dec 12 18:36:09.345843 kubelet[2727]: E1212 18:36:09.345771 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:10.916703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774395899.mount: Deactivated successfully.
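The dns.go:153 warnings above fire because the node's resolv.conf lists more nameservers than the classic resolver limit of three that the kubelet enforces: the "applied nameserver line" keeps the first three entries and the rest are omitted. A rough sketch of that truncation (illustrative Python only; the kubelet implements the check in Go):

# Illustrative truncation only; the kubelet implements this check in Go.
MAX_NAMESERVERS = 3  # classic resolver limit that the kubelet enforces

def applied_nameservers(resolv_conf: str) -> list:
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]

# A fourth configured nameserver reproduces the warning's applied line of three:
print(applied_nameservers(
    "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']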
Dec 12 18:36:10.948444 kubelet[2727]: E1212 18:36:10.948307 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:12.290431 kernel: hrtimer: interrupt took 4598761 ns Dec 12 18:36:12.955094 kubelet[2727]: E1212 18:36:12.954557 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:13.088909 containerd[1555]: time="2025-12-12T18:36:13.088759109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:13.093164 containerd[1555]: time="2025-12-12T18:36:13.092999442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Dec 12 18:36:13.095123 containerd[1555]: time="2025-12-12T18:36:13.093512975Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:13.104835 containerd[1555]: time="2025-12-12T18:36:13.104728091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:13.105549 containerd[1555]: time="2025-12-12T18:36:13.105492410Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.005285959s" Dec 12 18:36:13.105549 containerd[1555]: time="2025-12-12T18:36:13.105533188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 12 18:36:13.117854 containerd[1555]: time="2025-12-12T18:36:13.116731032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 18:36:13.175280 containerd[1555]: time="2025-12-12T18:36:13.174832772Z" level=info msg="CreateContainer within sandbox \"49dd0f87c3b2c40313c07b6b4bcf0fe3177110adf22a995f6112fedc20eca1cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 18:36:13.250560 containerd[1555]: time="2025-12-12T18:36:13.249999647Z" level=info msg="Container 1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:36:13.318207 containerd[1555]: time="2025-12-12T18:36:13.318059753Z" level=info msg="CreateContainer within sandbox \"49dd0f87c3b2c40313c07b6b4bcf0fe3177110adf22a995f6112fedc20eca1cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3\"" Dec 12 18:36:13.326100 containerd[1555]: time="2025-12-12T18:36:13.322723766Z" level=info msg="StartContainer 
for \"1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3\"" Dec 12 18:36:13.326100 containerd[1555]: time="2025-12-12T18:36:13.324662556Z" level=info msg="connecting to shim 1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3" address="unix:///run/containerd/s/a5e5208efcb3c2584687e25cc66464100e96baf6368db7f57b9d8b0504fa3816" protocol=ttrpc version=3 Dec 12 18:36:13.375939 systemd[1]: Started cri-containerd-1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3.scope - libcontainer container 1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3. Dec 12 18:36:13.629818 containerd[1555]: time="2025-12-12T18:36:13.629739093Z" level=info msg="StartContainer for \"1a6643b5ed95b01534fd056319b3b4f1ac21f8907eac88e4b89a51409f2a47a3\" returns successfully" Dec 12 18:36:14.306621 kubelet[2727]: E1212 18:36:14.306451 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:14.350244 kubelet[2727]: E1212 18:36:14.350185 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:14.350244 kubelet[2727]: W1212 18:36:14.350222 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:14.350244 kubelet[2727]: E1212 18:36:14.350251 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:14.375451 kubelet[2727]: E1212 18:36:14.362416 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:14.375451 kubelet[2727]: W1212 18:36:14.362463 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:14.375451 kubelet[2727]: E1212 18:36:14.362499 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:14.378749 kubelet[2727]: E1212 18:36:14.378684 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:14.378749 kubelet[2727]: W1212 18:36:14.378728 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:14.378994 kubelet[2727]: E1212 18:36:14.378798 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 12 18:36:14.494058 kubelet[2727]: E1212 18:36:14.494004 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:14.494058 kubelet[2727]: W1212 18:36:14.494022 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:14.494058 kubelet[2727]: E1212 18:36:14.494044 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 12 18:36:14.957064 kubelet[2727]: E1212 18:36:14.956183 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:15.247366 containerd[1555]: time="2025-12-12T18:36:15.247156905Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:15.249310 containerd[1555]: time="2025-12-12T18:36:15.249271923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Dec 12 18:36:15.255311 containerd[1555]: time="2025-12-12T18:36:15.255204811Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:15.258396 containerd[1555]: time="2025-12-12T18:36:15.258221872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:15.259528 containerd[1555]: time="2025-12-12T18:36:15.258961001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.142169202s" Dec 12 18:36:15.259528 containerd[1555]: time="2025-12-12T18:36:15.259002551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 12 18:36:15.286131 containerd[1555]: time="2025-12-12T18:36:15.286060291Z" level=info msg="CreateContainer within sandbox \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 18:36:15.308811 kubelet[2727]: I1212 18:36:15.306677 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:36:15.308811 kubelet[2727]: E1212 18:36:15.307406 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:15.320870 containerd[1555]: time="2025-12-12T18:36:15.319357281Z" level=info msg="Container 9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:36:15.322090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889792921.mount: Deactivated successfully. 
Dec 12 18:36:15.346863 containerd[1555]: time="2025-12-12T18:36:15.346777952Z" level=info msg="CreateContainer within sandbox \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3\"" Dec 12 18:36:15.348416 containerd[1555]: time="2025-12-12T18:36:15.348378586Z" level=info msg="StartContainer for \"9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3\"" Dec 12 18:36:15.350546 containerd[1555]: time="2025-12-12T18:36:15.350497352Z" level=info msg="connecting to shim 9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3" address="unix:///run/containerd/s/561becdfeb76663763e605dff422d96c5e6172ee4b34ff575b62cbdd43322acd" protocol=ttrpc version=3
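Each "connecting to shim" record above names a plain unix domain socket under /run/containerd/s/ that containerd dials with ttrpc (version=3 in these records). As a diagnostic sketch only (not part of containerd, and it checks socket reachability, not the ttrpc protocol itself), such an address can be probed directly on the node:

# Diagnostic sketch: the shim address from the log record above is an
# ordinary unix domain socket, so reachability is an ordinary connect.
import socket

ADDR = "/run/containerd/s/561becdfeb76663763e605dff422d96c5e6172ee4b34ff575b62cbdd43322acd"

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(ADDR)  # FileNotFoundError / ConnectionRefusedError => shim gone
print("shim socket is accepting connections")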
Error: unexpected end of JSON input" Dec 12 18:36:15.417673 kubelet[2727]: E1212 18:36:15.417608 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.417673 kubelet[2727]: W1212 18:36:15.417635 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.417673 kubelet[2727]: E1212 18:36:15.417654 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.421430 kubelet[2727]: E1212 18:36:15.418950 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.421430 kubelet[2727]: W1212 18:36:15.418963 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.421430 kubelet[2727]: E1212 18:36:15.418976 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.421430 kubelet[2727]: E1212 18:36:15.421420 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.421430 kubelet[2727]: W1212 18:36:15.421437 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.421847 kubelet[2727]: E1212 18:36:15.421454 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.421847 kubelet[2727]: E1212 18:36:15.421687 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.421847 kubelet[2727]: W1212 18:36:15.421701 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.421847 kubelet[2727]: E1212 18:36:15.421712 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.422053 kubelet[2727]: E1212 18:36:15.421913 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.422053 kubelet[2727]: W1212 18:36:15.421924 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.422053 kubelet[2727]: E1212 18:36:15.421935 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:15.422507 kubelet[2727]: E1212 18:36:15.422450 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.425456 kubelet[2727]: W1212 18:36:15.422587 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.425456 kubelet[2727]: E1212 18:36:15.424055 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.425456 kubelet[2727]: E1212 18:36:15.424726 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.425456 kubelet[2727]: W1212 18:36:15.424736 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.425456 kubelet[2727]: E1212 18:36:15.424770 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.425456 kubelet[2727]: E1212 18:36:15.424955 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.425456 kubelet[2727]: W1212 18:36:15.424963 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.425456 kubelet[2727]: E1212 18:36:15.424973 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.425456 kubelet[2727]: E1212 18:36:15.425179 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.425456 kubelet[2727]: W1212 18:36:15.425187 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.425761 kubelet[2727]: E1212 18:36:15.425197 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.427406 systemd[1]: Started cri-containerd-9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3.scope - libcontainer container 9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3. 
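The driver-call.go:262 / driver-call.go:149 / plugins.go:703 triplets repeating through this stretch all describe a single condition: the kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init command, the binary is missing, so stdout comes back empty and the JSON decode fails with "unexpected end of JSON input". Below is a minimal Go sketch of that call pattern — a simplified stand-in, not the kubelet source; the driverStatus shape is assumed from the FlexVolume output convention:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print for
// "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver is a simplified stand-in for the kubelet's driver-call logic:
// run the driver binary with a command and unmarshal its stdout as JSON.
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(path, args...).Output()
	if err != nil {
		// A missing binary surfaces here as an exec error; the kubelet logs
		// it as `executable file not found in $PATH, output: ""`.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// Empty output yields "unexpected end of JSON input", as in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, uerr)
	}
	return &st, nil
}

func main() {
	// Path copied from the log; the binary does not exist on this node.
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println("init:", err)
}
```

The noise would stop once a working driver binary exists at that path or the stale nodeagent~uds directory is removed from the FlexVolume plugin directory; until then the prober re-fails on every plugin probe cycle, which is why the same triplet recurs.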
Dec 12 18:36:15.431535 kubelet[2727]: E1212 18:36:15.428714 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.431535 kubelet[2727]: W1212 18:36:15.428732 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.431535 kubelet[2727]: E1212 18:36:15.428749 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.431981 kubelet[2727]: E1212 18:36:15.431942 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.431981 kubelet[2727]: W1212 18:36:15.431966 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.431981 kubelet[2727]: E1212 18:36:15.431984 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.432466 kubelet[2727]: E1212 18:36:15.432446 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.432466 kubelet[2727]: W1212 18:36:15.432462 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.432552 kubelet[2727]: E1212 18:36:15.432526 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.433242 kubelet[2727]: E1212 18:36:15.433213 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.433242 kubelet[2727]: W1212 18:36:15.433231 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.433242 kubelet[2727]: E1212 18:36:15.433243 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 18:36:15.436692 kubelet[2727]: E1212 18:36:15.436468 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 18:36:15.436692 kubelet[2727]: W1212 18:36:15.436491 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 18:36:15.436692 kubelet[2727]: E1212 18:36:15.436508 2727 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 18:36:15.638742 containerd[1555]: time="2025-12-12T18:36:15.636584669Z" level=info msg="StartContainer for \"9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3\" returns successfully" Dec 12 18:36:15.662960 systemd[1]: cri-containerd-9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3.scope: Deactivated successfully. Dec 12 18:36:15.663951 containerd[1555]: time="2025-12-12T18:36:15.663903043Z" level=info msg="received container exit event container_id:\"9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3\" id:\"9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3\" pid:3506 exited_at:{seconds:1765564575 nanos:663467101}" Dec 12 18:36:15.663962 systemd[1]: cri-containerd-9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3.scope: Consumed 75ms CPU time, 6.5M memory peak, 4.6M written to disk. Dec 12 18:36:15.762447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9410ab6ad869d16ccd0926d73e5ef698e9f58c3abbf92b568c222611e6ecc0f3-rootfs.mount: Deactivated successfully. Dec 12 18:36:16.330320 kubelet[2727]: E1212 18:36:16.324038 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:16.331015 containerd[1555]: time="2025-12-12T18:36:16.328055689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 18:36:16.424663 kubelet[2727]: I1212 18:36:16.419911 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-649c7c4f4c-ndg97" podStartSLOduration=4.407012109 podStartE2EDuration="8.419885599s" podCreationTimestamp="2025-12-12 18:36:08 +0000 UTC" firstStartedPulling="2025-12-12 18:36:09.099620305 +0000 UTC m=+30.714987204" lastFinishedPulling="2025-12-12 18:36:13.112493794 +0000 UTC m=+34.727860694" observedRunningTime="2025-12-12 18:36:14.384140347 +0000 UTC m=+35.999507256" watchObservedRunningTime="2025-12-12 18:36:16.419885599 +0000 UTC m=+38.035252498" Dec 12 18:36:16.956477 kubelet[2727]: E1212 18:36:16.948334 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:18.955469 kubelet[2727]: E1212 18:36:18.955330 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:19.790837 kubelet[2727]: I1212 18:36:19.788434 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 18:36:19.791128 kubelet[2727]: E1212 18:36:19.791087 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:20.362609 kubelet[2727]: E1212 18:36:20.362555 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:20.951583 kubelet[2727]: E1212 18:36:20.948675 2727 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:22.949907 kubelet[2727]: E1212 18:36:22.947768 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:24.473950 containerd[1555]: time="2025-12-12T18:36:24.472879557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:24.476190 containerd[1555]: time="2025-12-12T18:36:24.476108070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Dec 12 18:36:24.485822 containerd[1555]: time="2025-12-12T18:36:24.485568065Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:24.501192 containerd[1555]: time="2025-12-12T18:36:24.498672375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:24.501192 containerd[1555]: time="2025-12-12T18:36:24.499684085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.171579902s" Dec 12 18:36:24.501192 containerd[1555]: time="2025-12-12T18:36:24.499721818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 12 18:36:24.526199 containerd[1555]: time="2025-12-12T18:36:24.524279986Z" level=info msg="CreateContainer within sandbox \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 18:36:24.589489 containerd[1555]: time="2025-12-12T18:36:24.587880835Z" level=info msg="Container 9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:36:24.637853 containerd[1555]: time="2025-12-12T18:36:24.634089958Z" level=info msg="CreateContainer within sandbox \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8\"" Dec 12 18:36:24.645722 containerd[1555]: time="2025-12-12T18:36:24.643410364Z" level=info msg="StartContainer for \"9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8\"" Dec 12 18:36:24.659957 containerd[1555]: time="2025-12-12T18:36:24.653947255Z" level=info msg="connecting to shim 9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8" 
address="unix:///run/containerd/s/561becdfeb76663763e605dff422d96c5e6172ee4b34ff575b62cbdd43322acd" protocol=ttrpc version=3 Dec 12 18:36:24.750930 systemd[1]: Started cri-containerd-9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8.scope - libcontainer container 9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8. Dec 12 18:36:24.952717 kubelet[2727]: E1212 18:36:24.949023 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:24.953342 containerd[1555]: time="2025-12-12T18:36:24.951967483Z" level=info msg="StartContainer for \"9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8\" returns successfully" Dec 12 18:36:25.413736 kubelet[2727]: E1212 18:36:25.411757 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:26.417351 kubelet[2727]: E1212 18:36:26.417264 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:26.854841 systemd[1]: cri-containerd-9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8.scope: Deactivated successfully. Dec 12 18:36:26.855304 systemd[1]: cri-containerd-9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8.scope: Consumed 1.036s CPU time, 181.5M memory peak, 4.1M read from disk, 171.3M written to disk. Dec 12 18:36:26.867332 containerd[1555]: time="2025-12-12T18:36:26.867119245Z" level=info msg="received container exit event container_id:\"9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8\" id:\"9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8\" pid:3568 exited_at:{seconds:1765564586 nanos:861211308}" Dec 12 18:36:26.942282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d07fb10850714335b45ccca3641caecd14cc39dfe1e4fb60786f2fc401b10a8-rootfs.mount: Deactivated successfully. 
Dec 12 18:36:26.949818 kubelet[2727]: E1212 18:36:26.949082 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:26.977947 kubelet[2727]: I1212 18:36:26.977041 2727 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 18:36:27.783832 kubelet[2727]: I1212 18:36:27.783337 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-ca-bundle\") pod \"whisker-7657c76566-4sqmn\" (UID: \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\") " pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:27.783832 kubelet[2727]: I1212 18:36:27.783507 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-backend-key-pair\") pod \"whisker-7657c76566-4sqmn\" (UID: \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\") " pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:27.786557 kubelet[2727]: I1212 18:36:27.783553 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29qgw\" (UniqueName: \"kubernetes.io/projected/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-kube-api-access-29qgw\") pod \"whisker-7657c76566-4sqmn\" (UID: \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\") " pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:27.789075 systemd[1]: Created slice kubepods-besteffort-poddf0e4c6b_2ef9_409a_ab59_75e3dc03ae44.slice - libcontainer container kubepods-besteffort-poddf0e4c6b_2ef9_409a_ab59_75e3dc03ae44.slice. Dec 12 18:36:27.991023 kubelet[2727]: I1212 18:36:27.990815 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eca2884d-7c05-43ed-b865-dfa363821ba5-config-volume\") pod \"coredns-674b8bbfcf-gp587\" (UID: \"eca2884d-7c05-43ed-b865-dfa363821ba5\") " pod="kube-system/coredns-674b8bbfcf-gp587" Dec 12 18:36:27.991023 kubelet[2727]: I1212 18:36:27.990901 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drgx\" (UniqueName: \"kubernetes.io/projected/eca2884d-7c05-43ed-b865-dfa363821ba5-kube-api-access-5drgx\") pod \"coredns-674b8bbfcf-gp587\" (UID: \"eca2884d-7c05-43ed-b865-dfa363821ba5\") " pod="kube-system/coredns-674b8bbfcf-gp587" Dec 12 18:36:27.996235 systemd[1]: Created slice kubepods-burstable-podeca2884d_7c05_43ed_b865_dfa363821ba5.slice - libcontainer container kubepods-burstable-podeca2884d_7c05_43ed_b865_dfa363821ba5.slice. Dec 12 18:36:28.100663 containerd[1555]: time="2025-12-12T18:36:28.100422043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7657c76566-4sqmn,Uid:df0e4c6b-2ef9-409a-ab59-75e3dc03ae44,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:28.125256 systemd[1]: Created slice kubepods-besteffort-pod8652b687_d41e_47f6_a864_e604e24deb5b.slice - libcontainer container kubepods-besteffort-pod8652b687_d41e_47f6_a864_e604e24deb5b.slice. 
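The slice names in the systemd lines above follow the kubelet systemd cgroup driver's scheme, visible directly in the log: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID mapped to underscores. A sketch of the derivation (illustrative helper, not kubelet code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName derives the systemd slice for a pod the way the log shows:
// "kubepods-<qos>-pod<uid-with-underscores>.slice".
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID of the whisker-7657c76566-4sqmn pod from the volume lines above.
	fmt.Println(podSliceName("besteffort", "df0e4c6b-2ef9-409a-ab59-75e3dc03ae44"))
	// -> kubepods-besteffort-poddf0e4c6b_2ef9_409a_ab59_75e3dc03ae44.slice
}
```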
Dec 12 18:36:28.157486 containerd[1555]: time="2025-12-12T18:36:28.157412039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mtm8f,Uid:8652b687-d41e-47f6-a864-e604e24deb5b,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:28.193204 kubelet[2727]: I1212 18:36:28.192528 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/69dcfde4-a416-44cb-b592-faa494483016-calico-apiserver-certs\") pod \"calico-apiserver-5f4dfd9b79-68db6\" (UID: \"69dcfde4-a416-44cb-b592-faa494483016\") " pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" Dec 12 18:36:28.193204 kubelet[2727]: I1212 18:36:28.192586 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/433cc46b-ce9f-4fdf-9392-bdd29bdc4330-calico-apiserver-certs\") pod \"calico-apiserver-5f4dfd9b79-gmfhk\" (UID: \"433cc46b-ce9f-4fdf-9392-bdd29bdc4330\") " pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" Dec 12 18:36:28.193204 kubelet[2727]: I1212 18:36:28.192617 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmk7r\" (UniqueName: \"kubernetes.io/projected/69dcfde4-a416-44cb-b592-faa494483016-kube-api-access-bmk7r\") pod \"calico-apiserver-5f4dfd9b79-68db6\" (UID: \"69dcfde4-a416-44cb-b592-faa494483016\") " pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" Dec 12 18:36:28.193204 kubelet[2727]: I1212 18:36:28.192645 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4j8s\" (UniqueName: \"kubernetes.io/projected/433cc46b-ce9f-4fdf-9392-bdd29bdc4330-kube-api-access-p4j8s\") pod \"calico-apiserver-5f4dfd9b79-gmfhk\" (UID: \"433cc46b-ce9f-4fdf-9392-bdd29bdc4330\") " pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" Dec 12 18:36:28.214853 systemd[1]: Created slice kubepods-besteffort-pod433cc46b_ce9f_4fdf_9392_bdd29bdc4330.slice - libcontainer container kubepods-besteffort-pod433cc46b_ce9f_4fdf_9392_bdd29bdc4330.slice. Dec 12 18:36:28.318655 kubelet[2727]: E1212 18:36:28.305355 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:28.322504 containerd[1555]: time="2025-12-12T18:36:28.320744047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gp587,Uid:eca2884d-7c05-43ed-b865-dfa363821ba5,Namespace:kube-system,Attempt:0,}" Dec 12 18:36:28.381954 systemd[1]: Created slice kubepods-besteffort-podb2de3129_f188_4d80_9725_7a97224ed672.slice - libcontainer container kubepods-besteffort-podb2de3129_f188_4d80_9725_7a97224ed672.slice. 
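The VerifyControllerAttachedVolume entries above enumerate the volumes the kubelet attaches for the calico-apiserver pods: a Secret-backed certificate volume and a projected kube-api-access-* service-account token volume per pod. Expressed as k8s.io/api/core/v1 pod-spec fragments, roughly as follows — volume names are copied from the log, while the backing secret name and token path are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vols := []corev1.Volume{
		{
			// Secret-backed cert volume, as in the UniqueName
			// "kubernetes.io/secret/.../calico-apiserver-certs" above.
			Name: "calico-apiserver-certs",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "calico-apiserver-certs"},
			},
		},
		{
			// kube-api-access-* volumes are projected service-account tokens.
			Name: "kube-api-access-bmk7r",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
	for _, v := range vols {
		fmt.Println("volume:", v.Name)
	}
}
```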
Dec 12 18:36:28.396493 kubelet[2727]: I1212 18:36:28.396379 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97c6c51c-88d4-4a5c-b977-999f64d65996-tigera-ca-bundle\") pod \"calico-kube-controllers-845fcfc7bd-n97cm\" (UID: \"97c6c51c-88d4-4a5c-b977-999f64d65996\") " pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" Dec 12 18:36:28.402502 kubelet[2727]: I1212 18:36:28.400156 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79zb\" (UniqueName: \"kubernetes.io/projected/97c6c51c-88d4-4a5c-b977-999f64d65996-kube-api-access-f79zb\") pod \"calico-kube-controllers-845fcfc7bd-n97cm\" (UID: \"97c6c51c-88d4-4a5c-b977-999f64d65996\") " pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" Dec 12 18:36:28.402502 kubelet[2727]: I1212 18:36:28.400278 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2de3129-f188-4d80-9725-7a97224ed672-config\") pod \"goldmane-666569f655-clgcw\" (UID: \"b2de3129-f188-4d80-9725-7a97224ed672\") " pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:28.402690 kubelet[2727]: I1212 18:36:28.402537 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b2de3129-f188-4d80-9725-7a97224ed672-goldmane-key-pair\") pod \"goldmane-666569f655-clgcw\" (UID: \"b2de3129-f188-4d80-9725-7a97224ed672\") " pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:28.402690 kubelet[2727]: I1212 18:36:28.402603 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9dsm\" (UniqueName: \"kubernetes.io/projected/b2de3129-f188-4d80-9725-7a97224ed672-kube-api-access-m9dsm\") pod \"goldmane-666569f655-clgcw\" (UID: \"b2de3129-f188-4d80-9725-7a97224ed672\") " pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:28.402755 kubelet[2727]: I1212 18:36:28.402694 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2de3129-f188-4d80-9725-7a97224ed672-goldmane-ca-bundle\") pod \"goldmane-666569f655-clgcw\" (UID: \"b2de3129-f188-4d80-9725-7a97224ed672\") " pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:28.411718 systemd[1]: Created slice kubepods-besteffort-pod97c6c51c_88d4_4a5c_b977_999f64d65996.slice - libcontainer container kubepods-besteffort-pod97c6c51c_88d4_4a5c_b977_999f64d65996.slice. Dec 12 18:36:28.528296 containerd[1555]: time="2025-12-12T18:36:28.528237567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-gmfhk,Uid:433cc46b-ce9f-4fdf-9392-bdd29bdc4330,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:36:28.561608 kubelet[2727]: E1212 18:36:28.561370 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:28.594248 containerd[1555]: time="2025-12-12T18:36:28.592814772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 18:36:28.605467 systemd[1]: Created slice kubepods-besteffort-pod69dcfde4_a416_44cb_b592_faa494483016.slice - libcontainer container kubepods-besteffort-pod69dcfde4_a416_44cb_b592_faa494483016.slice. 
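The dns.go:153 "Nameserver limits exceeded" warnings recurring through this log mean the node's resolv.conf lists more nameservers than the limit of three the kubelet applies when building pod DNS config, so only the first three survive (here 1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that truncation — an illustrative parser, not kubelet source; the fourth nameserver is hypothetical:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// firstNameservers returns at most max "nameserver" entries from resolv.conf
// content, mirroring the behaviour the kubelet warning reports.
func firstNameservers(resolvConf string, max int) []string {
	var ns []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if len(ns) > max {
		fmt.Printf("nameserver limits exceeded, keeping first %d of %d\n", max, len(ns))
		ns = ns[:max]
	}
	return ns
}

func main() {
	// Hypothetical resolv.conf with four entries; the log shows the first
	// three being the applied nameserver line.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(firstNameservers(conf, 3))
}
```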
Dec 12 18:36:28.609373 kubelet[2727]: I1212 18:36:28.609208 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5b51e08-d9b5-4edc-977d-d07e96ed0aed-config-volume\") pod \"coredns-674b8bbfcf-btnsn\" (UID: \"e5b51e08-d9b5-4edc-977d-d07e96ed0aed\") " pod="kube-system/coredns-674b8bbfcf-btnsn" Dec 12 18:36:28.609373 kubelet[2727]: I1212 18:36:28.609321 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6nw\" (UniqueName: \"kubernetes.io/projected/e5b51e08-d9b5-4edc-977d-d07e96ed0aed-kube-api-access-bh6nw\") pod \"coredns-674b8bbfcf-btnsn\" (UID: \"e5b51e08-d9b5-4edc-977d-d07e96ed0aed\") " pod="kube-system/coredns-674b8bbfcf-btnsn" Dec 12 18:36:28.616736 containerd[1555]: time="2025-12-12T18:36:28.616657357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-68db6,Uid:69dcfde4-a416-44cb-b592-faa494483016,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:36:28.630834 systemd[1]: Created slice kubepods-burstable-pode5b51e08_d9b5_4edc_977d_d07e96ed0aed.slice - libcontainer container kubepods-burstable-pode5b51e08_d9b5_4edc_977d_d07e96ed0aed.slice. Dec 12 18:36:28.700009 containerd[1555]: time="2025-12-12T18:36:28.699870007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-clgcw,Uid:b2de3129-f188-4d80-9725-7a97224ed672,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:28.772008 containerd[1555]: time="2025-12-12T18:36:28.771943004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845fcfc7bd-n97cm,Uid:97c6c51c-88d4-4a5c-b977-999f64d65996,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:28.956442 kubelet[2727]: E1212 18:36:28.956239 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:28.959510 containerd[1555]: time="2025-12-12T18:36:28.958732991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btnsn,Uid:e5b51e08-d9b5-4edc-977d-d07e96ed0aed,Namespace:kube-system,Attempt:0,}" Dec 12 18:36:29.013742 containerd[1555]: time="2025-12-12T18:36:29.013674029Z" level=error msg="Failed to destroy network for sandbox \"caf297bf18355b89cb6cda2844cd0db13489848a12c367b8eff9a58069cdac84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.026089 containerd[1555]: time="2025-12-12T18:36:29.026006099Z" level=error msg="Failed to destroy network for sandbox \"08d210693f11f5f9e20a32f63d863a9934f5f7b14f20bbeb7f434b0ae1c890a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.026358 containerd[1555]: time="2025-12-12T18:36:29.026323896Z" level=error msg="Failed to destroy network for sandbox \"ba4ce2b88ffe3d68cc271686e67a9ef8b4f5ad645b26320a1a27aa7fb76f73fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.030619 containerd[1555]: time="2025-12-12T18:36:29.030557726Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gp587,Uid:eca2884d-7c05-43ed-b865-dfa363821ba5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf297bf18355b89cb6cda2844cd0db13489848a12c367b8eff9a58069cdac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.032730 containerd[1555]: time="2025-12-12T18:36:29.032635791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-gmfhk,Uid:433cc46b-ce9f-4fdf-9392-bdd29bdc4330,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d210693f11f5f9e20a32f63d863a9934f5f7b14f20bbeb7f434b0ae1c890a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.033960 containerd[1555]: time="2025-12-12T18:36:29.033657066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-clgcw,Uid:b2de3129-f188-4d80-9725-7a97224ed672,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba4ce2b88ffe3d68cc271686e67a9ef8b4f5ad645b26320a1a27aa7fb76f73fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.044899 kubelet[2727]: E1212 18:36:29.044706 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf297bf18355b89cb6cda2844cd0db13489848a12c367b8eff9a58069cdac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.044899 kubelet[2727]: E1212 18:36:29.044887 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d210693f11f5f9e20a32f63d863a9934f5f7b14f20bbeb7f434b0ae1c890a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.045106 kubelet[2727]: E1212 18:36:29.044967 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d210693f11f5f9e20a32f63d863a9934f5f7b14f20bbeb7f434b0ae1c890a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" Dec 12 18:36:29.045106 kubelet[2727]: E1212 18:36:29.045005 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08d210693f11f5f9e20a32f63d863a9934f5f7b14f20bbeb7f434b0ae1c890a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" Dec 12 18:36:29.045106 
kubelet[2727]: E1212 18:36:29.045078 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f4dfd9b79-gmfhk_calico-apiserver(433cc46b-ce9f-4fdf-9392-bdd29bdc4330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f4dfd9b79-gmfhk_calico-apiserver(433cc46b-ce9f-4fdf-9392-bdd29bdc4330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08d210693f11f5f9e20a32f63d863a9934f5f7b14f20bbeb7f434b0ae1c890a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:36:29.046221 containerd[1555]: time="2025-12-12T18:36:29.045932342Z" level=error msg="Failed to destroy network for sandbox \"0842710deb2b1822371a8027fbada29c2019873c9d88785d6d12b2d1b373b2b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.046365 kubelet[2727]: E1212 18:36:29.046073 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf297bf18355b89cb6cda2844cd0db13489848a12c367b8eff9a58069cdac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gp587" Dec 12 18:36:29.046365 kubelet[2727]: E1212 18:36:29.046103 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf297bf18355b89cb6cda2844cd0db13489848a12c367b8eff9a58069cdac84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gp587" Dec 12 18:36:29.046365 kubelet[2727]: E1212 18:36:29.046163 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gp587_kube-system(eca2884d-7c05-43ed-b865-dfa363821ba5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gp587_kube-system(eca2884d-7c05-43ed-b865-dfa363821ba5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caf297bf18355b89cb6cda2844cd0db13489848a12c367b8eff9a58069cdac84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gp587" podUID="eca2884d-7c05-43ed-b865-dfa363821ba5" Dec 12 18:36:29.047381 kubelet[2727]: E1212 18:36:29.047239 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba4ce2b88ffe3d68cc271686e67a9ef8b4f5ad645b26320a1a27aa7fb76f73fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.047381 kubelet[2727]: E1212 18:36:29.047276 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba4ce2b88ffe3d68cc271686e67a9ef8b4f5ad645b26320a1a27aa7fb76f73fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:29.047381 kubelet[2727]: E1212 18:36:29.047297 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba4ce2b88ffe3d68cc271686e67a9ef8b4f5ad645b26320a1a27aa7fb76f73fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:29.047547 kubelet[2727]: E1212 18:36:29.047334 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba4ce2b88ffe3d68cc271686e67a9ef8b4f5ad645b26320a1a27aa7fb76f73fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:36:29.063031 containerd[1555]: time="2025-12-12T18:36:29.060400892Z" level=error msg="Failed to destroy network for sandbox \"9fe65df0d7e1d26d567ee9b6a8ee51676c963e10ac3583573be1f9df1ebb95de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.065026 containerd[1555]: time="2025-12-12T18:36:29.046064131Z" level=error msg="Failed to destroy network for sandbox \"e35fd167449fd3e18f40cf353095ed9406e57221a63a6f1533e4b79ed685d8c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.067505 systemd[1]: run-netns-cni\x2dbf1649f8\x2d402e\x2dfbe8\x2dd3af\x2d768dc6684c40.mount: Deactivated successfully. Dec 12 18:36:29.067993 systemd[1]: run-netns-cni\x2d5ad07fde\x2d3135\x2df5bf\x2dec77\x2d0943100d1edc.mount: Deactivated successfully. Dec 12 18:36:29.077630 systemd[1]: run-netns-cni\x2d239d778e\x2ddb00\x2dcce6\x2d971c\x2dc163e664ce63.mount: Deactivated successfully. Dec 12 18:36:29.113912 containerd[1555]: time="2025-12-12T18:36:29.113756786Z" level=error msg="Failed to destroy network for sandbox \"8bd322ea56f400775cdfe440affc6b5caf11baf0d23b1d444865833adc7d3e9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.118255 systemd[1]: run-netns-cni\x2de16c765e\x2d724b\x2d7e1e\x2dd860\x2dbd92bb5b2a03.mount: Deactivated successfully. 
Dec 12 18:36:29.154643 containerd[1555]: time="2025-12-12T18:36:29.154562485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mtm8f,Uid:8652b687-d41e-47f6-a864-e604e24deb5b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0842710deb2b1822371a8027fbada29c2019873c9d88785d6d12b2d1b373b2b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.157065 kubelet[2727]: E1212 18:36:29.156926 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0842710deb2b1822371a8027fbada29c2019873c9d88785d6d12b2d1b373b2b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.157404 kubelet[2727]: E1212 18:36:29.157271 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0842710deb2b1822371a8027fbada29c2019873c9d88785d6d12b2d1b373b2b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:29.157530 kubelet[2727]: E1212 18:36:29.157505 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0842710deb2b1822371a8027fbada29c2019873c9d88785d6d12b2d1b373b2b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:29.157857 kubelet[2727]: E1212 18:36:29.157799 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0842710deb2b1822371a8027fbada29c2019873c9d88785d6d12b2d1b373b2b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:29.162150 containerd[1555]: time="2025-12-12T18:36:29.162069320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7657c76566-4sqmn,Uid:df0e4c6b-2ef9-409a-ab59-75e3dc03ae44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35fd167449fd3e18f40cf353095ed9406e57221a63a6f1533e4b79ed685d8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.164234 kubelet[2727]: E1212 18:36:29.164172 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"e35fd167449fd3e18f40cf353095ed9406e57221a63a6f1533e4b79ed685d8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.164374 kubelet[2727]: E1212 18:36:29.164274 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35fd167449fd3e18f40cf353095ed9406e57221a63a6f1533e4b79ed685d8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:29.164374 kubelet[2727]: E1212 18:36:29.164303 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e35fd167449fd3e18f40cf353095ed9406e57221a63a6f1533e4b79ed685d8c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:29.165393 kubelet[2727]: E1212 18:36:29.165311 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7657c76566-4sqmn_calico-system(df0e4c6b-2ef9-409a-ab59-75e3dc03ae44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7657c76566-4sqmn_calico-system(df0e4c6b-2ef9-409a-ab59-75e3dc03ae44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e35fd167449fd3e18f40cf353095ed9406e57221a63a6f1533e4b79ed685d8c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7657c76566-4sqmn" podUID="df0e4c6b-2ef9-409a-ab59-75e3dc03ae44" Dec 12 18:36:29.167586 containerd[1555]: time="2025-12-12T18:36:29.167522335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845fcfc7bd-n97cm,Uid:97c6c51c-88d4-4a5c-b977-999f64d65996,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd322ea56f400775cdfe440affc6b5caf11baf0d23b1d444865833adc7d3e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.168048 kubelet[2727]: E1212 18:36:29.167984 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd322ea56f400775cdfe440affc6b5caf11baf0d23b1d444865833adc7d3e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.168113 kubelet[2727]: E1212 18:36:29.168048 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd322ea56f400775cdfe440affc6b5caf11baf0d23b1d444865833adc7d3e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" Dec 12 18:36:29.168113 kubelet[2727]: E1212 18:36:29.168083 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bd322ea56f400775cdfe440affc6b5caf11baf0d23b1d444865833adc7d3e9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" Dec 12 18:36:29.168201 kubelet[2727]: E1212 18:36:29.168149 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-845fcfc7bd-n97cm_calico-system(97c6c51c-88d4-4a5c-b977-999f64d65996)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-845fcfc7bd-n97cm_calico-system(97c6c51c-88d4-4a5c-b977-999f64d65996)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bd322ea56f400775cdfe440affc6b5caf11baf0d23b1d444865833adc7d3e9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:36:29.174532 containerd[1555]: time="2025-12-12T18:36:29.172860832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-68db6,Uid:69dcfde4-a416-44cb-b592-faa494483016,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe65df0d7e1d26d567ee9b6a8ee51676c963e10ac3583573be1f9df1ebb95de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.175118 kubelet[2727]: E1212 18:36:29.173099 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe65df0d7e1d26d567ee9b6a8ee51676c963e10ac3583573be1f9df1ebb95de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.175118 kubelet[2727]: E1212 18:36:29.173230 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe65df0d7e1d26d567ee9b6a8ee51676c963e10ac3583573be1f9df1ebb95de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" Dec 12 18:36:29.175118 kubelet[2727]: E1212 18:36:29.173263 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fe65df0d7e1d26d567ee9b6a8ee51676c963e10ac3583573be1f9df1ebb95de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" Dec 12 18:36:29.175300 kubelet[2727]: E1212 18:36:29.173354 2727 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fe65df0d7e1d26d567ee9b6a8ee51676c963e10ac3583573be1f9df1ebb95de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:36:29.178042 containerd[1555]: time="2025-12-12T18:36:29.177916264Z" level=error msg="Failed to destroy network for sandbox \"e8e70fd1b0d8ad4352cff1c20bd1ff3287de319e4eeb6404c34570cb5310c40a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.190988 systemd[1]: run-netns-cni\x2de919b68b\x2d2988\x2d700b\x2dc9cd\x2d78eb0554f114.mount: Deactivated successfully. Dec 12 18:36:29.197260 containerd[1555]: time="2025-12-12T18:36:29.197183982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btnsn,Uid:e5b51e08-d9b5-4edc-977d-d07e96ed0aed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e70fd1b0d8ad4352cff1c20bd1ff3287de319e4eeb6404c34570cb5310c40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.197708 kubelet[2727]: E1212 18:36:29.197647 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e70fd1b0d8ad4352cff1c20bd1ff3287de319e4eeb6404c34570cb5310c40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:29.197776 kubelet[2727]: E1212 18:36:29.197740 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e70fd1b0d8ad4352cff1c20bd1ff3287de319e4eeb6404c34570cb5310c40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-btnsn" Dec 12 18:36:29.197868 kubelet[2727]: E1212 18:36:29.197773 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8e70fd1b0d8ad4352cff1c20bd1ff3287de319e4eeb6404c34570cb5310c40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-btnsn" Dec 12 18:36:29.197920 kubelet[2727]: E1212 18:36:29.197859 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-btnsn_kube-system(e5b51e08-d9b5-4edc-977d-d07e96ed0aed)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-674b8bbfcf-btnsn_kube-system(e5b51e08-d9b5-4edc-977d-d07e96ed0aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8e70fd1b0d8ad4352cff1c20bd1ff3287de319e4eeb6404c34570cb5310c40a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-btnsn" podUID="e5b51e08-d9b5-4edc-977d-d07e96ed0aed" Dec 12 18:36:39.955120 containerd[1555]: time="2025-12-12T18:36:39.955052576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mtm8f,Uid:8652b687-d41e-47f6-a864-e604e24deb5b,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:40.522138 containerd[1555]: time="2025-12-12T18:36:40.522054918Z" level=error msg="Failed to destroy network for sandbox \"861f53d745cc710c8efeddfe69f653b1d7210decf5a0744d21fc1c71ae4ed473\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:40.528666 systemd[1]: run-netns-cni\x2d05e63e61\x2d2fd4\x2d120b\x2d8098\x2d42078477c1d2.mount: Deactivated successfully. Dec 12 18:36:40.535359 containerd[1555]: time="2025-12-12T18:36:40.535283274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mtm8f,Uid:8652b687-d41e-47f6-a864-e604e24deb5b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"861f53d745cc710c8efeddfe69f653b1d7210decf5a0744d21fc1c71ae4ed473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:40.536051 kubelet[2727]: E1212 18:36:40.535991 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861f53d745cc710c8efeddfe69f653b1d7210decf5a0744d21fc1c71ae4ed473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:40.536925 kubelet[2727]: E1212 18:36:40.536725 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861f53d745cc710c8efeddfe69f653b1d7210decf5a0744d21fc1c71ae4ed473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:40.536925 kubelet[2727]: E1212 18:36:40.536764 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861f53d745cc710c8efeddfe69f653b1d7210decf5a0744d21fc1c71ae4ed473\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mtm8f" Dec 12 18:36:40.538159 kubelet[2727]: E1212 18:36:40.537128 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"861f53d745cc710c8efeddfe69f653b1d7210decf5a0744d21fc1c71ae4ed473\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:40.950706 containerd[1555]: time="2025-12-12T18:36:40.950620261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7657c76566-4sqmn,Uid:df0e4c6b-2ef9-409a-ab59-75e3dc03ae44,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:41.084375 containerd[1555]: time="2025-12-12T18:36:41.084305453Z" level=error msg="Failed to destroy network for sandbox \"1b5e0c166bc97e1447c5fac9fcc6c744f2458d19171713e0847333d272fd60e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:41.086509 containerd[1555]: time="2025-12-12T18:36:41.085980855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7657c76566-4sqmn,Uid:df0e4c6b-2ef9-409a-ab59-75e3dc03ae44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e0c166bc97e1447c5fac9fcc6c744f2458d19171713e0847333d272fd60e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:41.086834 kubelet[2727]: E1212 18:36:41.086733 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e0c166bc97e1447c5fac9fcc6c744f2458d19171713e0847333d272fd60e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:41.086932 kubelet[2727]: E1212 18:36:41.086868 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e0c166bc97e1447c5fac9fcc6c744f2458d19171713e0847333d272fd60e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:41.086932 kubelet[2727]: E1212 18:36:41.086899 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b5e0c166bc97e1447c5fac9fcc6c744f2458d19171713e0847333d272fd60e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7657c76566-4sqmn" Dec 12 18:36:41.089407 kubelet[2727]: E1212 18:36:41.086985 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7657c76566-4sqmn_calico-system(df0e4c6b-2ef9-409a-ab59-75e3dc03ae44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7657c76566-4sqmn_calico-system(df0e4c6b-2ef9-409a-ab59-75e3dc03ae44)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"1b5e0c166bc97e1447c5fac9fcc6c744f2458d19171713e0847333d272fd60e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7657c76566-4sqmn" podUID="df0e4c6b-2ef9-409a-ab59-75e3dc03ae44" Dec 12 18:36:41.091904 systemd[1]: run-netns-cni\x2d79a6f027\x2dfe8f\x2dd13c\x2d7d2e\x2d9756f139400e.mount: Deactivated successfully. Dec 12 18:36:41.947815 kubelet[2727]: E1212 18:36:41.947745 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:41.949200 containerd[1555]: time="2025-12-12T18:36:41.949143229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-clgcw,Uid:b2de3129-f188-4d80-9725-7a97224ed672,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:41.951459 containerd[1555]: time="2025-12-12T18:36:41.950402413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btnsn,Uid:e5b51e08-d9b5-4edc-977d-d07e96ed0aed,Namespace:kube-system,Attempt:0,}" Dec 12 18:36:42.288326 containerd[1555]: time="2025-12-12T18:36:42.287953393Z" level=error msg="Failed to destroy network for sandbox \"c10da726bbb650aa4fbd912042ac0a9195ca5b5dab741a6102b666d4789cdf72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:42.294461 systemd[1]: run-netns-cni\x2d672ae4b8\x2d0bda\x2d9f05\x2dd493\x2d0dbd0ea7d2a5.mount: Deactivated successfully. Dec 12 18:36:42.296373 containerd[1555]: time="2025-12-12T18:36:42.296316393Z" level=error msg="Failed to destroy network for sandbox \"54fe55790d2117953733bcb978bb5db3815c4779211774887455826b57652b29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:42.300681 systemd[1]: run-netns-cni\x2d93ae3e05\x2d836d\x2df238\x2d5eaa\x2d16a3d769f93f.mount: Deactivated successfully. 
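[Annotation] The dns.go:153 entry above is kubelet enforcing the resolv.conf limit: glibc honours at most three nameservers (MAXNS = 3), kubelet applies the same cap before handing DNS config to pods, and the dropped entries are what the "Nameserver limits exceeded" message reports. A minimal Go sketch of that truncation; the helper name applyNameserverLimit and the fourth server 9.9.9.9 are invented for illustration, and only the three applied entries come from the log.

package main

import "fmt"

// glibc reads at most three nameservers from resolv.conf (MAXNS = 3);
// kubelet applies the same cap and logs the entries it had to drop.
const maxNameservers = 3

func applyNameserverLimit(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	fmt.Println(applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
	// Output: [1.1.1.1 1.0.0.1 8.8.8.8] — the applied nameserver line above.
}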
Dec 12 18:36:42.387270 containerd[1555]: time="2025-12-12T18:36:42.384827022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btnsn,Uid:e5b51e08-d9b5-4edc-977d-d07e96ed0aed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10da726bbb650aa4fbd912042ac0a9195ca5b5dab741a6102b666d4789cdf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:42.387964 kubelet[2727]: E1212 18:36:42.385519 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10da726bbb650aa4fbd912042ac0a9195ca5b5dab741a6102b666d4789cdf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:42.387964 kubelet[2727]: E1212 18:36:42.386607 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10da726bbb650aa4fbd912042ac0a9195ca5b5dab741a6102b666d4789cdf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-btnsn" Dec 12 18:36:42.387964 kubelet[2727]: E1212 18:36:42.386650 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10da726bbb650aa4fbd912042ac0a9195ca5b5dab741a6102b666d4789cdf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-btnsn" Dec 12 18:36:42.388164 containerd[1555]: time="2025-12-12T18:36:42.387861127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-clgcw,Uid:b2de3129-f188-4d80-9725-7a97224ed672,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54fe55790d2117953733bcb978bb5db3815c4779211774887455826b57652b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:42.388230 kubelet[2727]: E1212 18:36:42.387386 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-btnsn_kube-system(e5b51e08-d9b5-4edc-977d-d07e96ed0aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-btnsn_kube-system(e5b51e08-d9b5-4edc-977d-d07e96ed0aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c10da726bbb650aa4fbd912042ac0a9195ca5b5dab741a6102b666d4789cdf72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-btnsn" podUID="e5b51e08-d9b5-4edc-977d-d07e96ed0aed" Dec 12 18:36:42.388230 kubelet[2727]: E1212 18:36:42.388131 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"54fe55790d2117953733bcb978bb5db3815c4779211774887455826b57652b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:42.388230 kubelet[2727]: E1212 18:36:42.388176 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54fe55790d2117953733bcb978bb5db3815c4779211774887455826b57652b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:42.388378 kubelet[2727]: E1212 18:36:42.388198 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54fe55790d2117953733bcb978bb5db3815c4779211774887455826b57652b29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-clgcw" Dec 12 18:36:42.388378 kubelet[2727]: E1212 18:36:42.388241 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54fe55790d2117953733bcb978bb5db3815c4779211774887455826b57652b29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:36:43.309851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560914369.mount: Deactivated successfully. 
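[Annotation] Every plugin type="calico" failed (add)/(delete) entry above fails on the same precondition: the CNI binary stats /var/lib/calico/nodename, a file that calico/node writes only once it is running with /var/lib/calico/ mounted, and until then no sandbox can be networked or torn down cleanly. A minimal Go sketch of that check, assuming only the stat-and-read semantics; this is illustrative, not Calico's actual code.

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the path the CNI plugin checks before networking a pod.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// The condition behind every "failed (add)" line in this log.
		fmt.Println("nodename missing: calico/node is not running yet")
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("node name:", strings.TrimSpace(string(data)))
}

Once calico-node starts (below, shortly after 18:36:44), the file appears and sandbox creation begins to succeed.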
Dec 12 18:36:43.374699 containerd[1555]: time="2025-12-12T18:36:43.372019394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:43.378987 containerd[1555]: time="2025-12-12T18:36:43.378808649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Dec 12 18:36:43.380223 containerd[1555]: time="2025-12-12T18:36:43.380162473Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:43.390521 containerd[1555]: time="2025-12-12T18:36:43.388132141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 18:36:43.390521 containerd[1555]: time="2025-12-12T18:36:43.389145034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 14.796271166s" Dec 12 18:36:43.390521 containerd[1555]: time="2025-12-12T18:36:43.389186160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 12 18:36:43.464563 containerd[1555]: time="2025-12-12T18:36:43.463160024Z" level=info msg="CreateContainer within sandbox \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 18:36:43.500396 containerd[1555]: time="2025-12-12T18:36:43.500307403Z" level=info msg="Container 36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:36:43.534571 containerd[1555]: time="2025-12-12T18:36:43.534492913Z" level=info msg="CreateContainer within sandbox \"39d44712ca943ca5981db023412c4bd66191ca55a267c022b1a186ed1d749b96\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9\"" Dec 12 18:36:43.542507 containerd[1555]: time="2025-12-12T18:36:43.541899952Z" level=info msg="StartContainer for \"36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9\"" Dec 12 18:36:43.543904 containerd[1555]: time="2025-12-12T18:36:43.543861739Z" level=info msg="connecting to shim 36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9" address="unix:///run/containerd/s/561becdfeb76663763e605dff422d96c5e6172ee4b34ff575b62cbdd43322acd" protocol=ttrpc version=3 Dec 12 18:36:43.727968 systemd[1]: Started cri-containerd-36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9.scope - libcontainer container 36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9. 
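[Annotation] The pull above moved 156883675 bytes in 14.796271166s, i.e. roughly 10 MiB/s, and the slow image pull is why calico-node took so long to start and why the sandbox errors above repeated for over half a minute. A quick Go check of that arithmetic, with both constants copied from the log lines:

package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 156883675               // from "stop pulling image"
	pullTime := 14796271166 * time.Nanosecond // 14.796271166s from "Pulled image"
	fmt.Printf("calico/node pull averaged %.1f MiB/s\n",
		float64(bytesRead)/pullTime.Seconds()/(1<<20))
}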
Dec 12 18:36:43.953867 kubelet[2727]: E1212 18:36:43.950178 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:43.954533 containerd[1555]: time="2025-12-12T18:36:43.953034183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gp587,Uid:eca2884d-7c05-43ed-b865-dfa363821ba5,Namespace:kube-system,Attempt:0,}" Dec 12 18:36:43.954533 containerd[1555]: time="2025-12-12T18:36:43.953312056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-68db6,Uid:69dcfde4-a416-44cb-b592-faa494483016,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:36:44.314074 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 18:36:44.314259 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 12 18:36:44.621521 containerd[1555]: time="2025-12-12T18:36:44.615043880Z" level=info msg="StartContainer for \"36cdc5614202db9dcb308ac044d1376ed9c8660284589316c70f894b03dc84a9\" returns successfully" Dec 12 18:36:44.697854 kubelet[2727]: E1212 18:36:44.687741 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:44.886832 kubelet[2727]: I1212 18:36:44.886521 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g4r7c" podStartSLOduration=2.848252199 podStartE2EDuration="36.886192959s" podCreationTimestamp="2025-12-12 18:36:08 +0000 UTC" firstStartedPulling="2025-12-12 18:36:09.353261719 +0000 UTC m=+30.968628618" lastFinishedPulling="2025-12-12 18:36:43.391202479 +0000 UTC m=+65.006569378" observedRunningTime="2025-12-12 18:36:44.863230261 +0000 UTC m=+66.478597160" watchObservedRunningTime="2025-12-12 18:36:44.886192959 +0000 UTC m=+66.501559858" Dec 12 18:36:44.960994 containerd[1555]: time="2025-12-12T18:36:44.960921235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845fcfc7bd-n97cm,Uid:97c6c51c-88d4-4a5c-b977-999f64d65996,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:44.962866 containerd[1555]: time="2025-12-12T18:36:44.962162052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-gmfhk,Uid:433cc46b-ce9f-4fdf-9392-bdd29bdc4330,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:36:45.101538 kubelet[2727]: I1212 18:36:45.100455 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-ca-bundle\") pod \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\" (UID: \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\") " Dec 12 18:36:45.104230 kubelet[2727]: I1212 18:36:45.101466 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-backend-key-pair\") pod \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\" (UID: \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\") " Dec 12 18:36:45.104426 kubelet[2727]: I1212 18:36:45.101097 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "df0e4c6b-2ef9-409a-ab59-75e3dc03ae44" (UID:
"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 18:36:45.105300 kubelet[2727]: I1212 18:36:45.104335 2727 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29qgw\" (UniqueName: \"kubernetes.io/projected/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-kube-api-access-29qgw\") pod \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\" (UID: \"df0e4c6b-2ef9-409a-ab59-75e3dc03ae44\") " Dec 12 18:36:45.105300 kubelet[2727]: I1212 18:36:45.105230 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 18:36:45.124832 kubelet[2727]: I1212 18:36:45.121370 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-kube-api-access-29qgw" (OuterVolumeSpecName: "kube-api-access-29qgw") pod "df0e4c6b-2ef9-409a-ab59-75e3dc03ae44" (UID: "df0e4c6b-2ef9-409a-ab59-75e3dc03ae44"). InnerVolumeSpecName "kube-api-access-29qgw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 18:36:45.132501 kubelet[2727]: I1212 18:36:45.131904 2727 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "df0e4c6b-2ef9-409a-ab59-75e3dc03ae44" (UID: "df0e4c6b-2ef9-409a-ab59-75e3dc03ae44"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 18:36:45.207229 kubelet[2727]: I1212 18:36:45.207045 2727 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 18:36:45.207229 kubelet[2727]: I1212 18:36:45.207096 2727 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-29qgw\" (UniqueName: \"kubernetes.io/projected/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44-kube-api-access-29qgw\") on node \"localhost\" DevicePath \"\"" Dec 12 18:36:45.313971 systemd[1]: var-lib-kubelet-pods-df0e4c6b\x2d2ef9\x2d409a\x2dab59\x2d75e3dc03ae44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d29qgw.mount: Deactivated successfully. Dec 12 18:36:45.314160 systemd[1]: var-lib-kubelet-pods-df0e4c6b\x2d2ef9\x2d409a\x2dab59\x2d75e3dc03ae44-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 18:36:45.693506 kubelet[2727]: E1212 18:36:45.693444 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:45.760215 systemd[1]: Removed slice kubepods-besteffort-poddf0e4c6b_2ef9_409a_ab59_75e3dc03ae44.slice - libcontainer container kubepods-besteffort-poddf0e4c6b_2ef9_409a_ab59_75e3dc03ae44.slice. 
Dec 12 18:36:45.903099 systemd-networkd[1468]: cali0c5b436d1ad: Link UP Dec 12 18:36:45.907547 systemd-networkd[1468]: cali0c5b436d1ad: Gained carrier Dec 12 18:36:46.017103 containerd[1555]: 2025-12-12 18:36:45.141 [INFO][4124] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:36:46.017103 containerd[1555]: 2025-12-12 18:36:45.177 [INFO][4124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0 calico-apiserver-5f4dfd9b79- calico-apiserver 433cc46b-ce9f-4fdf-9392-bdd29bdc4330 957 0 2025-12-12 18:35:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f4dfd9b79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f4dfd9b79-gmfhk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0c5b436d1ad [] [] }} ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-" Dec 12 18:36:46.017103 containerd[1555]: 2025-12-12 18:36:45.177 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.017103 containerd[1555]: 2025-12-12 18:36:45.456 [INFO][4172] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" HandleID="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.457 [INFO][4172] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" HandleID="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ab200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f4dfd9b79-gmfhk", "timestamp":"2025-12-12 18:36:45.456661116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.457 [INFO][4172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.459 [INFO][4172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.460 [INFO][4172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.497 [INFO][4172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" host="localhost" Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.551 [INFO][4172] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.582 [INFO][4172] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.608 [INFO][4172] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.622 [INFO][4172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:46.017833 containerd[1555]: 2025-12-12 18:36:45.625 [INFO][4172] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" host="localhost" Dec 12 18:36:46.018189 containerd[1555]: 2025-12-12 18:36:45.650 [INFO][4172] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c Dec 12 18:36:46.018189 containerd[1555]: 2025-12-12 18:36:45.678 [INFO][4172] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" host="localhost" Dec 12 18:36:46.018189 containerd[1555]: 2025-12-12 18:36:45.722 [INFO][4172] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" host="localhost" Dec 12 18:36:46.018189 containerd[1555]: 2025-12-12 18:36:45.722 [INFO][4172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" host="localhost" Dec 12 18:36:46.018189 containerd[1555]: 2025-12-12 18:36:45.722 [INFO][4172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
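[Annotation] The ipam trace above claims 192.168.88.129 from the host's affine block 192.168.88.128/26; with six host bits the block spans 64 addresses, which bounds how many pod IPs this node can hand out before needing another block. A small net/netip check, values copied from the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // host's affine block
	addr := netip.MustParseAddr("192.168.88.129")       // first IP claimed above
	fmt.Printf("%s contains %s: %v\n", block, addr, block.Contains(addr))
	fmt.Printf("block size: %d addresses\n", 1<<(32-block.Bits()))
}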
Dec 12 18:36:46.018189 containerd[1555]: 2025-12-12 18:36:45.722 [INFO][4172] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" HandleID="k8s-pod-network.04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.018404 containerd[1555]: 2025-12-12 18:36:45.737 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0", GenerateName:"calico-apiserver-5f4dfd9b79-", Namespace:"calico-apiserver", SelfLink:"", UID:"433cc46b-ce9f-4fdf-9392-bdd29bdc4330", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f4dfd9b79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f4dfd9b79-gmfhk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c5b436d1ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:46.018494 containerd[1555]: 2025-12-12 18:36:45.737 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.018494 containerd[1555]: 2025-12-12 18:36:45.737 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c5b436d1ad ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.018494 containerd[1555]: 2025-12-12 18:36:45.911 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.018601 containerd[1555]: 2025-12-12 18:36:45.913 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0", GenerateName:"calico-apiserver-5f4dfd9b79-", Namespace:"calico-apiserver", SelfLink:"", UID:"433cc46b-ce9f-4fdf-9392-bdd29bdc4330", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f4dfd9b79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c", Pod:"calico-apiserver-5f4dfd9b79-gmfhk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c5b436d1ad", MAC:"da:76:74:fd:e7:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:46.018681 containerd[1555]: 2025-12-12 18:36:46.003 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-gmfhk" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--gmfhk-eth0" Dec 12 18:36:46.101628 systemd-networkd[1468]: calid8110dfbd89: Link UP Dec 12 18:36:46.102771 systemd-networkd[1468]: calid8110dfbd89: Gained carrier Dec 12 18:36:46.157364 containerd[1555]: 2025-12-12 18:36:45.132 [INFO][4136] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:36:46.157364 containerd[1555]: 2025-12-12 18:36:45.167 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0 calico-kube-controllers-845fcfc7bd- calico-system 97c6c51c-88d4-4a5c-b977-999f64d65996 961 0 2025-12-12 18:36:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:845fcfc7bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-845fcfc7bd-n97cm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid8110dfbd89 [] [] }} ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-" Dec 12 18:36:46.157364 containerd[1555]: 2025-12-12 
18:36:45.167 [INFO][4136] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.157364 containerd[1555]: 2025-12-12 18:36:45.454 [INFO][4178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" HandleID="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Workload="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.455 [INFO][4178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" HandleID="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Workload="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c65c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-845fcfc7bd-n97cm", "timestamp":"2025-12-12 18:36:45.454521602 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.456 [INFO][4178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.722 [INFO][4178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.723 [INFO][4178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.807 [INFO][4178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" host="localhost" Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.842 [INFO][4178] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.918 [INFO][4178] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.927 [INFO][4178] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.954 [INFO][4178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:46.157672 containerd[1555]: 2025-12-12 18:36:45.955 [INFO][4178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" host="localhost" Dec 12 18:36:46.157949 containerd[1555]: 2025-12-12 18:36:46.002 [INFO][4178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571 Dec 12 18:36:46.157949 containerd[1555]: 2025-12-12 18:36:46.016 [INFO][4178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" host="localhost" Dec 12 18:36:46.157949 containerd[1555]: 2025-12-12 18:36:46.062 [INFO][4178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" host="localhost" Dec 12 18:36:46.157949 containerd[1555]: 2025-12-12 18:36:46.068 [INFO][4178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" host="localhost" Dec 12 18:36:46.157949 containerd[1555]: 2025-12-12 18:36:46.068 [INFO][4178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
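[Annotation] The apiserver endpoint configured above was recorded with MAC da:76:74:fd:e7:e9, and the kube-controllers endpoint just below gets be:08:53:b7:69:3f; both have the locally-administered bit set in the first octet, as software-generated veth addresses normally do. A short Go check of that bit (interpretation only; nothing here claims how Calico generates the address):

package main

import (
	"fmt"
	"net"
)

func main() {
	// MAC recorded for cali0c5b436d1ad in the endpoint dump above.
	hw, err := net.ParseMAC("da:76:74:fd:e7:e9")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s locally administered: %v\n", hw, hw[0]&0x02 != 0) // true
	fmt.Printf("%s unicast: %v\n", hw, hw[0]&0x01 == 0)              // true
}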
Dec 12 18:36:46.157949 containerd[1555]: 2025-12-12 18:36:46.068 [INFO][4178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" HandleID="k8s-pod-network.4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Workload="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.158101 containerd[1555]: 2025-12-12 18:36:46.097 [INFO][4136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0", GenerateName:"calico-kube-controllers-845fcfc7bd-", Namespace:"calico-system", SelfLink:"", UID:"97c6c51c-88d4-4a5c-b977-999f64d65996", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"845fcfc7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-845fcfc7bd-n97cm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid8110dfbd89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:46.158167 containerd[1555]: 2025-12-12 18:36:46.097 [INFO][4136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.158167 containerd[1555]: 2025-12-12 18:36:46.097 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8110dfbd89 ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.158167 containerd[1555]: 2025-12-12 18:36:46.103 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.158248 containerd[1555]: 2025-12-12 18:36:46.104 [INFO][4136] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0", GenerateName:"calico-kube-controllers-845fcfc7bd-", Namespace:"calico-system", SelfLink:"", UID:"97c6c51c-88d4-4a5c-b977-999f64d65996", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"845fcfc7bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571", Pod:"calico-kube-controllers-845fcfc7bd-n97cm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid8110dfbd89", MAC:"be:08:53:b7:69:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:46.158322 containerd[1555]: 2025-12-12 18:36:46.148 [INFO][4136] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" Namespace="calico-system" Pod="calico-kube-controllers-845fcfc7bd-n97cm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--845fcfc7bd--n97cm-eth0" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.094 [INFO][4098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.094 [INFO][4098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" iface="eth0" netns="/var/run/netns/cni-eec47a83-4194-611d-7508-663c87f526eb" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.096 [INFO][4098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" iface="eth0" netns="/var/run/netns/cni-eec47a83-4194-611d-7508-663c87f526eb" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.098 [INFO][4098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" iface="eth0" netns="/var/run/netns/cni-eec47a83-4194-611d-7508-663c87f526eb" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.098 [INFO][4098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.098 [INFO][4098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.458 [INFO][4153] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" HandleID="k8s-pod-network.e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:45.460 [INFO][4153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:46.169655 containerd[1555]: 2025-12-12 18:36:46.073 [INFO][4153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:36:46.170078 containerd[1555]: 2025-12-12 18:36:46.105 [WARNING][4153] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" HandleID="k8s-pod-network.e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:46.170078 containerd[1555]: 2025-12-12 18:36:46.105 [INFO][4153] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" HandleID="k8s-pod-network.e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:46.170078 containerd[1555]: 2025-12-12 18:36:46.116 [INFO][4153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:36:46.170078 containerd[1555]: 2025-12-12 18:36:46.152 [INFO][4098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68" Dec 12 18:36:46.177009 systemd[1]: run-netns-cni\x2deec47a83\x2d4194\x2d611d\x2d7508\x2d663c87f526eb.mount: Deactivated successfully. 
Dec 12 18:36:46.225368 containerd[1555]: time="2025-12-12T18:36:46.221494495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-68db6,Uid:69dcfde4-a416-44cb-b592-faa494483016,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:46.225569 kubelet[2727]: E1212 18:36:46.224980 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:46.226005 kubelet[2727]: E1212 18:36:46.225533 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" Dec 12 18:36:46.226005 kubelet[2727]: E1212 18:36:46.225996 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" Dec 12 18:36:46.226551 kubelet[2727]: E1212 18:36:46.226309 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e08fc3afc1918b66dd5752bccad0d88533cdb363cb889732b1f4112fd20bec68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:36:46.230286 kubelet[2727]: I1212 18:36:46.229032 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4469ce36-6286-4f2c-84e3-e653a9c04ba0-whisker-ca-bundle\") pod \"whisker-5d6876f9d4-45g7h\" (UID: \"4469ce36-6286-4f2c-84e3-e653a9c04ba0\") " pod="calico-system/whisker-5d6876f9d4-45g7h" Dec 12 18:36:46.230286 kubelet[2727]: I1212 18:36:46.229086 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dpcq\" (UniqueName: 
\"kubernetes.io/projected/4469ce36-6286-4f2c-84e3-e653a9c04ba0-kube-api-access-2dpcq\") pod \"whisker-5d6876f9d4-45g7h\" (UID: \"4469ce36-6286-4f2c-84e3-e653a9c04ba0\") " pod="calico-system/whisker-5d6876f9d4-45g7h" Dec 12 18:36:46.230286 kubelet[2727]: I1212 18:36:46.229114 2727 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4469ce36-6286-4f2c-84e3-e653a9c04ba0-whisker-backend-key-pair\") pod \"whisker-5d6876f9d4-45g7h\" (UID: \"4469ce36-6286-4f2c-84e3-e653a9c04ba0\") " pod="calico-system/whisker-5d6876f9d4-45g7h" Dec 12 18:36:46.266530 systemd[1]: Created slice kubepods-besteffort-pod4469ce36_6286_4f2c_84e3_e653a9c04ba0.slice - libcontainer container kubepods-besteffort-pod4469ce36_6286_4f2c_84e3_e653a9c04ba0.slice. Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.097 [INFO][4104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.097 [INFO][4104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" iface="eth0" netns="/var/run/netns/cni-56e507cc-5d3a-553d-afaf-7d6904498c77" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.098 [INFO][4104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" iface="eth0" netns="/var/run/netns/cni-56e507cc-5d3a-553d-afaf-7d6904498c77" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.104 [INFO][4104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" iface="eth0" netns="/var/run/netns/cni-56e507cc-5d3a-553d-afaf-7d6904498c77" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.104 [INFO][4104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.104 [INFO][4104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.452 [INFO][4157] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" HandleID="k8s-pod-network.dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Workload="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:45.460 [INFO][4157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:46.288567 containerd[1555]: 2025-12-12 18:36:46.123 [INFO][4157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:36:46.289209 containerd[1555]: 2025-12-12 18:36:46.185 [WARNING][4157] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" HandleID="k8s-pod-network.dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Workload="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:46.289209 containerd[1555]: 2025-12-12 18:36:46.185 [INFO][4157] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" HandleID="k8s-pod-network.dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Workload="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:46.289209 containerd[1555]: 2025-12-12 18:36:46.220 [INFO][4157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:36:46.289209 containerd[1555]: 2025-12-12 18:36:46.273 [INFO][4104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9" Dec 12 18:36:46.313022 systemd[1]: run-netns-cni\x2d56e507cc\x2d5d3a\x2d553d\x2dafaf\x2d7d6904498c77.mount: Deactivated successfully. Dec 12 18:36:46.322029 containerd[1555]: time="2025-12-12T18:36:46.321952584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gp587,Uid:eca2884d-7c05-43ed-b865-dfa363821ba5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:46.333562 kubelet[2727]: E1212 18:36:46.326114 2727 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 18:36:46.333562 kubelet[2727]: E1212 18:36:46.326201 2727 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gp587" Dec 12 18:36:46.333562 kubelet[2727]: E1212 18:36:46.326234 2727 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gp587" Dec 12 18:36:46.334008 kubelet[2727]: E1212 18:36:46.329710 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gp587_kube-system(eca2884d-7c05-43ed-b865-dfa363821ba5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gp587_kube-system(eca2884d-7c05-43ed-b865-dfa363821ba5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"dc841e5a0a44e1b464d10f611c64216b33513b0c4380ddd7354f1bfecba114b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gp587" podUID="eca2884d-7c05-43ed-b865-dfa363821ba5" Dec 12 18:36:46.336671 containerd[1555]: time="2025-12-12T18:36:46.336570015Z" level=info msg="connecting to shim 04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c" address="unix:///run/containerd/s/5eb69a22998f3366096f2458c55a6a9bac6de05474316b13539bfaff7339c32b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:46.340455 containerd[1555]: time="2025-12-12T18:36:46.340402282Z" level=info msg="connecting to shim 4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571" address="unix:///run/containerd/s/089725bb243ae4b4c78d2d9827f884fd5b36c3272a6f22c7237c6ef9621b7cf0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:46.439511 systemd[1]: Started cri-containerd-04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c.scope - libcontainer container 04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c. Dec 12 18:36:46.441690 systemd[1]: Started cri-containerd-4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571.scope - libcontainer container 4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571. Dec 12 18:36:46.474688 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:46.480367 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:46.585393 containerd[1555]: time="2025-12-12T18:36:46.581981660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-845fcfc7bd-n97cm,Uid:97c6c51c-88d4-4a5c-b977-999f64d65996,Namespace:calico-system,Attempt:0,} returns sandbox id \"4c999992386d9aada8ebb5ab8c879af6df4bb632b650f4639c3e3c9fac7ee571\"" Dec 12 18:36:46.585393 containerd[1555]: time="2025-12-12T18:36:46.583508041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:36:46.585979 containerd[1555]: time="2025-12-12T18:36:46.585954428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d6876f9d4-45g7h,Uid:4469ce36-6286-4f2c-84e3-e653a9c04ba0,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:46.595702 containerd[1555]: time="2025-12-12T18:36:46.595554313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-gmfhk,Uid:433cc46b-ce9f-4fdf-9392-bdd29bdc4330,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"04d4c0892f34e3b2af5712a9603d93539e448519212f611e9254b1f45d456d9c\"" Dec 12 18:36:46.703206 kubelet[2727]: E1212 18:36:46.703159 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:46.708088 containerd[1555]: time="2025-12-12T18:36:46.704060951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gp587,Uid:eca2884d-7c05-43ed-b865-dfa363821ba5,Namespace:kube-system,Attempt:0,}" Dec 12 18:36:46.708471 containerd[1555]: time="2025-12-12T18:36:46.708434903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-68db6,Uid:69dcfde4-a416-44cb-b592-faa494483016,Namespace:calico-apiserver,Attempt:0,}" Dec 12 18:36:46.972411 
kubelet[2727]: I1212 18:36:46.970049 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0e4c6b-2ef9-409a-ab59-75e3dc03ae44" path="/var/lib/kubelet/pods/df0e4c6b-2ef9-409a-ab59-75e3dc03ae44/volumes" Dec 12 18:36:46.993532 containerd[1555]: time="2025-12-12T18:36:46.992445027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:47.006569 containerd[1555]: time="2025-12-12T18:36:47.005317540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:36:47.057060 containerd[1555]: time="2025-12-12T18:36:47.056861756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:36:47.063858 containerd[1555]: time="2025-12-12T18:36:47.063605454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:36:47.063910 kubelet[2727]: E1212 18:36:47.057461 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:36:47.063910 kubelet[2727]: E1212 18:36:47.057573 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:36:47.064039 kubelet[2727]: E1212 18:36:47.059921 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f79zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-845fcfc7bd-n97cm_calico-system(97c6c51c-88d4-4a5c-b977-999f64d65996): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:47.067317 kubelet[2727]: E1212 18:36:47.066104 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:36:47.118197 systemd-networkd[1468]: cali5b8f181191c: Link UP Dec 12 18:36:47.129403 systemd-networkd[1468]: cali5b8f181191c: Gained carrier Dec 12 18:36:47.172524 containerd[1555]: 2025-12-12 18:36:46.724 [INFO][4356] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:36:47.172524 containerd[1555]: 2025-12-12 18:36:46.775 [INFO][4356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5d6876f9d4--45g7h-eth0 whisker-5d6876f9d4- calico-system 4469ce36-6286-4f2c-84e3-e653a9c04ba0 1065 0 2025-12-12 18:36:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d6876f9d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5d6876f9d4-45g7h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5b8f181191c [] [] }} ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-" Dec 12 18:36:47.172524 containerd[1555]: 2025-12-12 18:36:46.775 [INFO][4356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" 
WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.172524 containerd[1555]: 2025-12-12 18:36:46.881 [INFO][4397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" HandleID="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Workload="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.881 [INFO][4397] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" HandleID="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Workload="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000388760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5d6876f9d4-45g7h", "timestamp":"2025-12-12 18:36:46.881352213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.882 [INFO][4397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.882 [INFO][4397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.882 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.922 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" host="localhost" Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.951 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.980 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:46.996 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:47.010 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:47.172862 containerd[1555]: 2025-12-12 18:36:47.011 [INFO][4397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" host="localhost" Dec 12 18:36:47.173124 containerd[1555]: 2025-12-12 18:36:47.024 [INFO][4397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38 Dec 12 18:36:47.173124 containerd[1555]: 2025-12-12 18:36:47.058 [INFO][4397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" host="localhost" Dec 12 18:36:47.173124 containerd[1555]: 2025-12-12 18:36:47.101 [INFO][4397] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" host="localhost" Dec 12 18:36:47.173124 containerd[1555]: 2025-12-12 18:36:47.101 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" host="localhost" Dec 12 18:36:47.173124 containerd[1555]: 2025-12-12 18:36:47.103 [INFO][4397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 18:36:47.173124 containerd[1555]: 2025-12-12 18:36:47.103 [INFO][4397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" HandleID="k8s-pod-network.45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Workload="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.173291 containerd[1555]: 2025-12-12 18:36:47.109 [INFO][4356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d6876f9d4--45g7h-eth0", GenerateName:"whisker-5d6876f9d4-", Namespace:"calico-system", SelfLink:"", UID:"4469ce36-6286-4f2c-84e3-e653a9c04ba0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d6876f9d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5d6876f9d4-45g7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b8f181191c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:47.173291 containerd[1555]: 2025-12-12 18:36:47.109 [INFO][4356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.173387 containerd[1555]: 2025-12-12 18:36:47.110 [INFO][4356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b8f181191c ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.173387 containerd[1555]: 2025-12-12 18:36:47.135 [INFO][4356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" 
Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.173439 containerd[1555]: 2025-12-12 18:36:47.140 [INFO][4356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5d6876f9d4--45g7h-eth0", GenerateName:"whisker-5d6876f9d4-", Namespace:"calico-system", SelfLink:"", UID:"4469ce36-6286-4f2c-84e3-e653a9c04ba0", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d6876f9d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38", Pod:"whisker-5d6876f9d4-45g7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b8f181191c", MAC:"9a:bd:80:08:47:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:47.173501 containerd[1555]: 2025-12-12 18:36:47.164 [INFO][4356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" Namespace="calico-system" Pod="whisker-5d6876f9d4-45g7h" WorkloadEndpoint="localhost-k8s-whisker--5d6876f9d4--45g7h-eth0" Dec 12 18:36:47.217503 systemd-networkd[1468]: calid8110dfbd89: Gained IPv6LL Dec 12 18:36:47.278740 systemd-networkd[1468]: cali0c5b436d1ad: Gained IPv6LL Dec 12 18:36:47.337911 systemd-networkd[1468]: cali54e7b884f46: Link UP Dec 12 18:36:47.340546 systemd-networkd[1468]: cali54e7b884f46: Gained carrier Dec 12 18:36:47.420987 containerd[1555]: 2025-12-12 18:36:46.813 [INFO][4371] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:36:47.420987 containerd[1555]: 2025-12-12 18:36:46.832 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0 calico-apiserver-5f4dfd9b79- calico-apiserver 69dcfde4-a416-44cb-b592-faa494483016 1033 0 2025-12-12 18:35:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f4dfd9b79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f4dfd9b79-68db6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54e7b884f46 [] [] }} 
ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-" Dec 12 18:36:47.420987 containerd[1555]: 2025-12-12 18:36:46.833 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.420987 containerd[1555]: 2025-12-12 18:36:46.931 [INFO][4408] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" HandleID="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:46.931 [INFO][4408] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" HandleID="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f4dfd9b79-68db6", "timestamp":"2025-12-12 18:36:46.931350626 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:46.931 [INFO][4408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.101 [INFO][4408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.103 [INFO][4408] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.140 [INFO][4408] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" host="localhost" Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.168 [INFO][4408] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.185 [INFO][4408] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.195 [INFO][4408] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.202 [INFO][4408] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:47.421350 containerd[1555]: 2025-12-12 18:36:47.204 [INFO][4408] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" host="localhost" Dec 12 18:36:47.422594 containerd[1555]: 2025-12-12 18:36:47.210 [INFO][4408] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77 Dec 12 18:36:47.422594 containerd[1555]: 2025-12-12 18:36:47.231 [INFO][4408] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" host="localhost" Dec 12 18:36:47.422594 containerd[1555]: 2025-12-12 18:36:47.276 [INFO][4408] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" host="localhost" Dec 12 18:36:47.422594 containerd[1555]: 2025-12-12 18:36:47.277 [INFO][4408] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" host="localhost" Dec 12 18:36:47.422594 containerd[1555]: 2025-12-12 18:36:47.277 [INFO][4408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
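That completes one full assignment: the host's affinity for 192.168.88.128/26 is confirmed, a new handle named after the sandbox ID is created, and the block is written back to claim 192.168.88.132, the next free address after the .131 handed to the whisker pod moments earlier. The block arithmetic visible here can be checked with the Python standard library alone:

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")
    print(block.num_addresses)                    # 64 addresses in a /26
    for ip in ("192.168.88.131", "192.168.88.132", "192.168.88.133"):
        assert ipaddress.ip_address(ip) in block  # every claim is in-block
    print(block[3], block[4], block[5])           # .131, .132, .133 in order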
Dec 12 18:36:47.422594 containerd[1555]: 2025-12-12 18:36:47.292 [INFO][4408] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" HandleID="k8s-pod-network.d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Workload="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.422776 containerd[1555]: 2025-12-12 18:36:47.324 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0", GenerateName:"calico-apiserver-5f4dfd9b79-", Namespace:"calico-apiserver", SelfLink:"", UID:"69dcfde4-a416-44cb-b592-faa494483016", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f4dfd9b79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f4dfd9b79-68db6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54e7b884f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:47.422901 containerd[1555]: 2025-12-12 18:36:47.324 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.422901 containerd[1555]: 2025-12-12 18:36:47.324 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54e7b884f46 ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.422901 containerd[1555]: 2025-12-12 18:36:47.341 [INFO][4371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.423021 containerd[1555]: 2025-12-12 18:36:47.346 [INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0", GenerateName:"calico-apiserver-5f4dfd9b79-", Namespace:"calico-apiserver", SelfLink:"", UID:"69dcfde4-a416-44cb-b592-faa494483016", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f4dfd9b79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77", Pod:"calico-apiserver-5f4dfd9b79-68db6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54e7b884f46", MAC:"1a:f0:a8:3f:b4:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:47.425687 containerd[1555]: 2025-12-12 18:36:47.405 [INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" Namespace="calico-apiserver" Pod="calico-apiserver-5f4dfd9b79-68db6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f4dfd9b79--68db6-eth0" Dec 12 18:36:47.432824 containerd[1555]: time="2025-12-12T18:36:47.432367706Z" level=info msg="connecting to shim 45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38" address="unix:///run/containerd/s/18fa79df8021528ff471ceead41dfb474fb7ab60d1f1dae7a000c66f6d6e3dda" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:47.440565 containerd[1555]: time="2025-12-12T18:36:47.438434788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:47.478225 containerd[1555]: time="2025-12-12T18:36:47.475380113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:36:47.478428 kubelet[2727]: E1212 18:36:47.476256 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:36:47.478428 kubelet[2727]: E1212 18:36:47.476326 2727 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:36:47.478428 kubelet[2727]: E1212 18:36:47.476519 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4j8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-gmfhk_calico-apiserver(433cc46b-ce9f-4fdf-9392-bdd29bdc4330): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:47.478428 kubelet[2727]: E1212 18:36:47.477909 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:36:47.552499 containerd[1555]: time="2025-12-12T18:36:47.475544138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes 
read=77" Dec 12 18:36:47.587449 systemd-networkd[1468]: cali532f3e758f0: Link UP Dec 12 18:36:47.587696 systemd-networkd[1468]: cali532f3e758f0: Gained carrier Dec 12 18:36:47.633363 containerd[1555]: time="2025-12-12T18:36:47.630837029Z" level=info msg="connecting to shim d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77" address="unix:///run/containerd/s/da5dade43b8bd232e3d7ee67145e19a9230810fb6e7be717f4b0a2ec0edff265" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:47.661768 containerd[1555]: 2025-12-12 18:36:46.837 [INFO][4370] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 18:36:47.661768 containerd[1555]: 2025-12-12 18:36:46.877 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--gp587-eth0 coredns-674b8bbfcf- kube-system eca2884d-7c05-43ed-b865-dfa363821ba5 1032 0 2025-12-12 18:35:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-gp587 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali532f3e758f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-" Dec 12 18:36:47.661768 containerd[1555]: 2025-12-12 18:36:46.877 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.661768 containerd[1555]: 2025-12-12 18:36:47.035 [INFO][4418] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" HandleID="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Workload="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.036 [INFO][4418] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" HandleID="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Workload="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000343b30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-gp587", "timestamp":"2025-12-12 18:36:47.0359575 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.039 [INFO][4418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.277 [INFO][4418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.277 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.312 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" host="localhost" Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.338 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.390 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.418 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.426 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:47.662118 containerd[1555]: 2025-12-12 18:36:47.428 [INFO][4418] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" host="localhost" Dec 12 18:36:47.662537 containerd[1555]: 2025-12-12 18:36:47.445 [INFO][4418] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75 Dec 12 18:36:47.662537 containerd[1555]: 2025-12-12 18:36:47.463 [INFO][4418] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" host="localhost" Dec 12 18:36:47.662537 containerd[1555]: 2025-12-12 18:36:47.512 [INFO][4418] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" host="localhost" Dec 12 18:36:47.662537 containerd[1555]: 2025-12-12 18:36:47.515 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" host="localhost" Dec 12 18:36:47.662537 containerd[1555]: 2025-12-12 18:36:47.515 [INFO][4418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
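Request [4418] asked for the host-wide lock at 18:36:47.039 and obtained it at 18:36:47.277, immediately after [4408] released: roughly a quarter-second spent queued, the cost of serializing three simultaneous pod setups. Waits like this can be read straight off the paired "About to acquire"/"Acquired" timestamps; a small helper, assuming only the timestamp format shown in these messages:

    from datetime import datetime

    FMT = "%Y-%m-%d %H:%M:%S.%f"

    def lock_wait(asked: str, acquired: str) -> float:
        """Seconds between 'About to acquire' and 'Acquired' messages."""
        return (datetime.strptime(acquired, FMT)
                - datetime.strptime(asked, FMT)).total_seconds()

    print(lock_wait("2025-12-12 18:36:47.039", "2025-12-12 18:36:47.277"))  # 0.238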
Dec 12 18:36:47.662537 containerd[1555]: 2025-12-12 18:36:47.516 [INFO][4418] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" HandleID="k8s-pod-network.21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Workload="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.662749 containerd[1555]: 2025-12-12 18:36:47.548 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gp587-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eca2884d-7c05-43ed-b865-dfa363821ba5", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-gp587", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali532f3e758f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:47.662859 containerd[1555]: 2025-12-12 18:36:47.549 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.662859 containerd[1555]: 2025-12-12 18:36:47.550 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali532f3e758f0 ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.662859 containerd[1555]: 2025-12-12 18:36:47.582 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.662976 
containerd[1555]: 2025-12-12 18:36:47.582 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--gp587-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"eca2884d-7c05-43ed-b865-dfa363821ba5", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75", Pod:"coredns-674b8bbfcf-gp587", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali532f3e758f0", MAC:"66:d7:56:cf:ca:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:47.662976 containerd[1555]: 2025-12-12 18:36:47.646 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" Namespace="kube-system" Pod="coredns-674b8bbfcf-gp587" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--gp587-eth0" Dec 12 18:36:47.710412 kubelet[2727]: E1212 18:36:47.710114 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:36:47.717151 kubelet[2727]: E1212 18:36:47.711380 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:36:47.729162 systemd[1]: Started cri-containerd-45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38.scope - libcontainer container 45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38. Dec 12 18:36:47.828691 systemd[1]: Started cri-containerd-d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77.scope - libcontainer container d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77. Dec 12 18:36:47.865055 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:47.917424 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:47.921819 containerd[1555]: time="2025-12-12T18:36:47.921425070Z" level=info msg="connecting to shim 21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75" address="unix:///run/containerd/s/380edbf1b5b60be85c2274e27d9b54fea7539be041ac63e4379317f8fa1a7b54" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:47.975146 systemd[1]: Started cri-containerd-21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75.scope - libcontainer container 21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75. Dec 12 18:36:48.018551 containerd[1555]: time="2025-12-12T18:36:48.017674843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d6876f9d4-45g7h,Uid:4469ce36-6286-4f2c-84e3-e653a9c04ba0,Namespace:calico-system,Attempt:0,} returns sandbox id \"45cd6add9f42abeadeec4cdc561b573d5e9a0f1270898e36d0662acbd41c0c38\"" Dec 12 18:36:48.022040 containerd[1555]: time="2025-12-12T18:36:48.021985893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:36:48.037055 containerd[1555]: time="2025-12-12T18:36:48.036715541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f4dfd9b79-68db6,Uid:69dcfde4-a416-44cb-b592-faa494483016,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d672646a4f4a0e1a9f9aca3b357caf74378060585c716971d85226f9533aed77\"" Dec 12 18:36:48.066142 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:48.149307 containerd[1555]: time="2025-12-12T18:36:48.149168648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gp587,Uid:eca2884d-7c05-43ed-b865-dfa363821ba5,Namespace:kube-system,Attempt:0,} returns sandbox id \"21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75\"" Dec 12 18:36:48.150633 kubelet[2727]: E1212 18:36:48.150133 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:48.175781 systemd-networkd[1468]: cali5b8f181191c: Gained IPv6LL Dec 12 18:36:48.189262 containerd[1555]: time="2025-12-12T18:36:48.188420869Z" level=info msg="CreateContainer within sandbox \"21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:36:48.282140 containerd[1555]: time="2025-12-12T18:36:48.281981404Z" level=info msg="Container 
c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:36:48.386138 containerd[1555]: time="2025-12-12T18:36:48.386045394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:48.458042 containerd[1555]: time="2025-12-12T18:36:48.456437873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:36:48.458042 containerd[1555]: time="2025-12-12T18:36:48.456615583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:36:48.458042 containerd[1555]: time="2025-12-12T18:36:48.457866197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:36:48.458271 kubelet[2727]: E1212 18:36:48.456927 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:36:48.458271 kubelet[2727]: E1212 18:36:48.456994 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:36:48.458271 kubelet[2727]: E1212 18:36:48.457290 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c925c1a2ddfa4954a8cd0226d83a2f39,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:48.469249 containerd[1555]: time="2025-12-12T18:36:48.468756451Z" level=info msg="CreateContainer within sandbox \"21811eab216263bf27a68b1eac68407a8d16ca6089dd995034e47afd2f6e2b75\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436\"" Dec 12 18:36:48.470116 containerd[1555]: time="2025-12-12T18:36:48.470064662Z" level=info msg="StartContainer for \"c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436\"" Dec 12 18:36:48.476323 containerd[1555]: time="2025-12-12T18:36:48.476144037Z" level=info msg="connecting to shim c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436" address="unix:///run/containerd/s/380edbf1b5b60be85c2274e27d9b54fea7539be041ac63e4379317f8fa1a7b54" protocol=ttrpc version=3 Dec 12 18:36:48.554847 systemd[1]: Started cri-containerd-c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436.scope - libcontainer container c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436. Dec 12 18:36:48.702375 systemd-networkd[1468]: vxlan.calico: Link UP Dec 12 18:36:48.702397 systemd-networkd[1468]: vxlan.calico: Gained carrier Dec 12 18:36:48.738494 containerd[1555]: time="2025-12-12T18:36:48.738338995Z" level=info msg="StartContainer for \"c4040cef34112fbbe75e19895fff60e30b34fdeafb8ea75553f6ded6d77c8436\" returns successfully" Dec 12 18:36:48.755876 systemd-networkd[1468]: cali54e7b884f46: Gained IPv6LL Dec 12 18:36:48.796898 kubelet[2727]: E1212 18:36:48.796818 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:48.876203 containerd[1555]: time="2025-12-12T18:36:48.875713536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:48.884351 containerd[1555]: time="2025-12-12T18:36:48.883589500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:36:48.884351 containerd[1555]: time="2025-12-12T18:36:48.883745339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:36:48.885033 kubelet[2727]: E1212 18:36:48.884532 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:36:48.885033 kubelet[2727]: E1212 18:36:48.884593 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:36:48.885033 kubelet[2727]: E1212 18:36:48.884874 2727 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmk7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:48.890393 kubelet[2727]: E1212 18:36:48.889191 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:36:48.899268 containerd[1555]: time="2025-12-12T18:36:48.889607652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:36:49.066933 systemd-networkd[1468]: cali532f3e758f0: Gained IPv6LL Dec 12 18:36:49.295681 containerd[1555]: time="2025-12-12T18:36:49.294915223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:49.298522 containerd[1555]: time="2025-12-12T18:36:49.297231981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:36:49.298522 containerd[1555]: time="2025-12-12T18:36:49.297379145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:36:49.299714 kubelet[2727]: E1212 18:36:49.299580 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:36:49.299714 kubelet[2727]: E1212 18:36:49.299695 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:36:49.300047 kubelet[2727]: E1212 18:36:49.299944 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:49.304343 kubelet[2727]: E1212 18:36:49.301204 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:36:49.807949 kubelet[2727]: E1212 18:36:49.807479 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:36:49.809452 kubelet[2727]: E1212 18:36:49.809155 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:36:49.809452 kubelet[2727]: E1212 18:36:49.809400 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:49.860575 kubelet[2727]: I1212 18:36:49.857000 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gp587" podStartSLOduration=69.85697962 podStartE2EDuration="1m9.85697962s" podCreationTimestamp="2025-12-12 18:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:36:48.866887999 +0000 UTC m=+70.482254928" watchObservedRunningTime="2025-12-12 18:36:49.85697962 +0000 UTC m=+71.472346519" Dec 12 18:36:50.296702 systemd-networkd[1468]: vxlan.calico: Gained IPv6LL Dec 12 18:36:50.816355 kubelet[2727]: E1212 
18:36:50.814978 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:51.952980 kubelet[2727]: E1212 18:36:51.950245 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:51.953590 containerd[1555]: time="2025-12-12T18:36:51.952415394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mtm8f,Uid:8652b687-d41e-47f6-a864-e604e24deb5b,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:52.677972 systemd-networkd[1468]: calic7d0f448b2e: Link UP Dec 12 18:36:52.678617 systemd-networkd[1468]: calic7d0f448b2e: Gained carrier Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.353 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mtm8f-eth0 csi-node-driver- calico-system 8652b687-d41e-47f6-a864-e604e24deb5b 826 0 2025-12-12 18:36:08 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mtm8f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic7d0f448b2e [] [] <nil>}} ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.353 [INFO][4842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.442 [INFO][4851] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" HandleID="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Workload="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.443 [INFO][4851] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" HandleID="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Workload="localhost-k8s-csi--node--driver--mtm8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mtm8f", "timestamp":"2025-12-12 18:36:52.442704266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.443 [INFO][4851] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.443 [INFO][4851] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.443 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.496 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.536 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.554 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.561 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.571 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.571 [INFO][4851] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.577 [INFO][4851] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.606 [INFO][4851] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.656 [INFO][4851] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.656 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" host="localhost" Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.656 [INFO][4851] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:36:52.727107 containerd[1555]: 2025-12-12 18:36:52.656 [INFO][4851] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" HandleID="k8s-pod-network.ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Workload="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.727953 containerd[1555]: 2025-12-12 18:36:52.667 [INFO][4842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mtm8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8652b687-d41e-47f6-a864-e604e24deb5b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mtm8f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic7d0f448b2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:52.727953 containerd[1555]: 2025-12-12 18:36:52.667 [INFO][4842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.727953 containerd[1555]: 2025-12-12 18:36:52.667 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7d0f448b2e ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.727953 containerd[1555]: 2025-12-12 18:36:52.678 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.727953 containerd[1555]: 2025-12-12 18:36:52.679 [INFO][4842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f"
WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mtm8f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8652b687-d41e-47f6-a864-e604e24deb5b", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae", Pod:"csi-node-driver-mtm8f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic7d0f448b2e", MAC:"26:e9:d3:4a:b8:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:52.727953 containerd[1555]: 2025-12-12 18:36:52.710 [INFO][4842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" Namespace="calico-system" Pod="csi-node-driver-mtm8f" WorkloadEndpoint="localhost-k8s-csi--node--driver--mtm8f-eth0" Dec 12 18:36:52.842544 containerd[1555]: time="2025-12-12T18:36:52.842469195Z" level=info msg="connecting to shim ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae" address="unix:///run/containerd/s/aae7e1100730846b5d6ca0dbd6df578840255b04c6fce1087faab317599aa03d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:52.941497 systemd[1]: Started cri-containerd-ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae.scope - libcontainer container ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae. 
Dec 12 18:36:52.954716 containerd[1555]: time="2025-12-12T18:36:52.954655640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-clgcw,Uid:b2de3129-f188-4d80-9725-7a97224ed672,Namespace:calico-system,Attempt:0,}" Dec 12 18:36:52.990473 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:53.070975 containerd[1555]: time="2025-12-12T18:36:53.070921775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mtm8f,Uid:8652b687-d41e-47f6-a864-e604e24deb5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff1be521972f8397831f98c351cc6efc55ffe9ba066726d9896cf46c5649c1ae\"" Dec 12 18:36:53.078219 containerd[1555]: time="2025-12-12T18:36:53.078154687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:36:53.314345 systemd-networkd[1468]: cali1de38805080: Link UP Dec 12 18:36:53.316086 systemd-networkd[1468]: cali1de38805080: Gained carrier Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.054 [INFO][4907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--clgcw-eth0 goldmane-666569f655- calico-system b2de3129-f188-4d80-9725-7a97224ed672 960 0 2025-12-12 18:36:04 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-clgcw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1de38805080 [] [] <nil>}} ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.055 [INFO][4907] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.126 [INFO][4928] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" HandleID="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Workload="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.126 [INFO][4928] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" HandleID="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Workload="localhost-k8s-goldmane--666569f655--clgcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-clgcw", "timestamp":"2025-12-12 18:36:53.126445638 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.126 [INFO][4928] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.129 [INFO][4928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.130 [INFO][4928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.154 [INFO][4928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.180 [INFO][4928] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.197 [INFO][4928] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.208 [INFO][4928] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.222 [INFO][4928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.223 [INFO][4928] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.238 [INFO][4928] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.262 [INFO][4928] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.301 [INFO][4928] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.301 [INFO][4928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" host="localhost" Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.301 [INFO][4928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:36:53.370810 containerd[1555]: 2025-12-12 18:36:53.301 [INFO][4928] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" HandleID="k8s-pod-network.180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Workload="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.371583 containerd[1555]: 2025-12-12 18:36:53.307 [INFO][4907] cni-plugin/k8s.go 418: Populated endpoint ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--clgcw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b2de3129-f188-4d80-9725-7a97224ed672", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-clgcw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1de38805080", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:53.371583 containerd[1555]: 2025-12-12 18:36:53.308 [INFO][4907] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.371583 containerd[1555]: 2025-12-12 18:36:53.309 [INFO][4907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1de38805080 ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.371583 containerd[1555]: 2025-12-12 18:36:53.311 [INFO][4907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.371583 containerd[1555]: 2025-12-12 18:36:53.312 [INFO][4907] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--clgcw-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b2de3129-f188-4d80-9725-7a97224ed672", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 36, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f", Pod:"goldmane-666569f655-clgcw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1de38805080", MAC:"72:56:25:af:bb:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:53.371583 containerd[1555]: 2025-12-12 18:36:53.352 [INFO][4907] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" Namespace="calico-system" Pod="goldmane-666569f655-clgcw" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--clgcw-eth0" Dec 12 18:36:53.458993 containerd[1555]: time="2025-12-12T18:36:53.458317805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:53.464122 containerd[1555]: time="2025-12-12T18:36:53.463294157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:36:53.464122 containerd[1555]: time="2025-12-12T18:36:53.463832261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:36:53.464570 kubelet[2727]: E1212 18:36:53.463717 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:36:53.464570 kubelet[2727]: E1212 18:36:53.464002 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:36:53.465534 kubelet[2727]: E1212 18:36:53.465025 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:53.469915 containerd[1555]: time="2025-12-12T18:36:53.469857731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:36:53.470603 containerd[1555]: time="2025-12-12T18:36:53.470572585Z" level=info msg="connecting to shim 180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f" address="unix:///run/containerd/s/d70231ca549547406577013df533144649ab076e8de4e937706868b585285c62" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:53.527644 systemd[1]: Started cri-containerd-180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f.scope - libcontainer container 180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f. 
Dec 12 18:36:53.581028 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:53.667382 containerd[1555]: time="2025-12-12T18:36:53.667319512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-clgcw,Uid:b2de3129-f188-4d80-9725-7a97224ed672,Namespace:calico-system,Attempt:0,} returns sandbox id \"180b9d4295d6a344afd8438d232c5c898f4612ef9565f69e50634c75b6eb674f\"" Dec 12 18:36:53.796604 containerd[1555]: time="2025-12-12T18:36:53.796491826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:53.798927 containerd[1555]: time="2025-12-12T18:36:53.798845607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:36:53.799094 containerd[1555]: time="2025-12-12T18:36:53.799032626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:36:53.799408 kubelet[2727]: E1212 18:36:53.799355 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:36:53.799484 kubelet[2727]: E1212 18:36:53.799422 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:36:53.799770 kubelet[2727]: E1212 18:36:53.799694 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:53.802667 containerd[1555]: time="2025-12-12T18:36:53.802617903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:36:53.805399 kubelet[2727]: E1212 18:36:53.805346 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:53.829388 kubelet[2727]: E1212 18:36:53.829243 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:54.176722 containerd[1555]: time="2025-12-12T18:36:54.176624411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:54.181181 containerd[1555]: time="2025-12-12T18:36:54.181055613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:36:54.181181 containerd[1555]: time="2025-12-12T18:36:54.181128138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:36:54.181563 kubelet[2727]: E1212 18:36:54.181341 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:36:54.181563 kubelet[2727]: E1212 18:36:54.181480 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:36:54.182171 kubelet[2727]: E1212 18:36:54.181728 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9dsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:54.183313 kubelet[2727]: E1212 18:36:54.183215 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:36:54.581418 systemd-networkd[1468]: 
calic7d0f448b2e: Gained IPv6LL Dec 12 18:36:54.842670 kubelet[2727]: E1212 18:36:54.840202 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:36:54.845320 kubelet[2727]: E1212 18:36:54.845225 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:36:54.959056 kubelet[2727]: E1212 18:36:54.958384 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:55.018220 systemd-networkd[1468]: cali1de38805080: Gained IPv6LL Dec 12 18:36:55.948591 kubelet[2727]: E1212 18:36:55.948505 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:55.951848 containerd[1555]: time="2025-12-12T18:36:55.949691867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btnsn,Uid:e5b51e08-d9b5-4edc-977d-d07e96ed0aed,Namespace:kube-system,Attempt:0,}" Dec 12 18:36:56.407437 systemd-networkd[1468]: calia51e0b1078b: Link UP Dec 12 18:36:56.419461 systemd-networkd[1468]: calia51e0b1078b: Gained carrier Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.098 [INFO][4999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--btnsn-eth0 coredns-674b8bbfcf- kube-system e5b51e08-d9b5-4edc-977d-d07e96ed0aed 965 0 2025-12-12 18:35:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-btnsn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia51e0b1078b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-" 
Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.098 [INFO][4999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.179 [INFO][5014] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" HandleID="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Workload="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.179 [INFO][5014] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" HandleID="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Workload="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c2f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-btnsn", "timestamp":"2025-12-12 18:36:56.179151944 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.179 [INFO][5014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.179 [INFO][5014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.179 [INFO][5014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.199 [INFO][5014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.218 [INFO][5014] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.230 [INFO][5014] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.244 [INFO][5014] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.252 [INFO][5014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.252 [INFO][5014] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.261 [INFO][5014] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47 Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.274 [INFO][5014] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.347 [INFO][5014] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.347 [INFO][5014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" host="localhost" Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.347 [INFO][5014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 18:36:56.563366 containerd[1555]: 2025-12-12 18:36:56.347 [INFO][5014] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" HandleID="k8s-pod-network.21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Workload="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.564377 containerd[1555]: 2025-12-12 18:36:56.376 [INFO][4999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--btnsn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e5b51e08-d9b5-4edc-977d-d07e96ed0aed", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-btnsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia51e0b1078b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:56.564377 containerd[1555]: 2025-12-12 18:36:56.377 [INFO][4999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.564377 containerd[1555]: 2025-12-12 18:36:56.377 [INFO][4999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia51e0b1078b ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.564377 containerd[1555]: 2025-12-12 18:36:56.399 [INFO][4999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.564377 
containerd[1555]: 2025-12-12 18:36:56.439 [INFO][4999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--btnsn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e5b51e08-d9b5-4edc-977d-d07e96ed0aed", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 18, 35, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47", Pod:"coredns-674b8bbfcf-btnsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia51e0b1078b", MAC:"52:8b:4d:8a:97:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 18:36:56.564377 containerd[1555]: 2025-12-12 18:36:56.513 [INFO][4999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" Namespace="kube-system" Pod="coredns-674b8bbfcf-btnsn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--btnsn-eth0" Dec 12 18:36:56.754751 containerd[1555]: time="2025-12-12T18:36:56.753519223Z" level=info msg="connecting to shim 21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47" address="unix:///run/containerd/s/f4b6bc4248262eb2194cd3eb5827168f913624e00db142059a4e9f564a13ecf0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 18:36:56.906113 systemd[1]: Started cri-containerd-21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47.scope - libcontainer container 21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47. 
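
Note: the ipam/ipam.go trace above walks Calico's full assignment path — confirm host affinity for block 192.168.88.128/26, then claim 192.168.88.136/26 for the coredns pod under the host-wide IPAM lock. A minimal sanity check of that arithmetic, using only Python's standard ipaddress module and the values taken from the log entries above (the script is illustrative, not something that runs on the node):

    import ipaddress

    # Values from the ipam/ipam.go trace above.
    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = ipaddress.ip_interface("192.168.88.136/26")

    # A /26 spans 64 addresses, here .128 through .191.
    assert assigned.ip in block
    print(f"{assigned.ip} is one of {block.num_addresses} addresses in {block}")
    # -> 192.168.88.136 is one of 64 addresses in 192.168.88.128/26
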
Dec 12 18:36:57.008456 systemd-resolved[1471]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 18:36:57.167340 containerd[1555]: time="2025-12-12T18:36:57.167098588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-btnsn,Uid:e5b51e08-d9b5-4edc-977d-d07e96ed0aed,Namespace:kube-system,Attempt:0,} returns sandbox id \"21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47\"" Dec 12 18:36:57.169217 kubelet[2727]: E1212 18:36:57.169163 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:57.203112 containerd[1555]: time="2025-12-12T18:36:57.203046187Z" level=info msg="CreateContainer within sandbox \"21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 18:36:57.300016 containerd[1555]: time="2025-12-12T18:36:57.299814702Z" level=info msg="Container 810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa: CDI devices from CRI Config.CDIDevices: []" Dec 12 18:36:57.311688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435199782.mount: Deactivated successfully. Dec 12 18:36:57.345705 containerd[1555]: time="2025-12-12T18:36:57.345518751Z" level=info msg="CreateContainer within sandbox \"21441165125f3c771db65490db74470db8d85ce3f46523240bdb732818966c47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa\"" Dec 12 18:36:57.346577 containerd[1555]: time="2025-12-12T18:36:57.346535343Z" level=info msg="StartContainer for \"810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa\"" Dec 12 18:36:57.354879 containerd[1555]: time="2025-12-12T18:36:57.354702970Z" level=info msg="connecting to shim 810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa" address="unix:///run/containerd/s/f4b6bc4248262eb2194cd3eb5827168f913624e00db142059a4e9f564a13ecf0" protocol=ttrpc version=3 Dec 12 18:36:57.426760 systemd[1]: Started cri-containerd-810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa.scope - libcontainer container 810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa. 
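
Note: the recurring kubelet "Nameserver limits exceeded" warning (dns.go:153) fires because the glibc resolver honours at most three nameserver entries (MAXNS = 3), so kubelet keeps "1.1.1.1 1.0.0.1 8.8.8.8" and omits the rest. A minimal sketch reproducing that check against a resolv.conf — the three-entry limit is the real glibc/kubelet constant, while the function and path are illustrative only:

    MAXNS = 3  # glibc resolver limit that kubelet's dns.go enforces

    def check_nameservers(path="/etc/resolv.conf"):
        with open(path) as f:
            servers = [line.split()[1] for line in f
                       if line.startswith("nameserver") and len(line.split()) > 1]
        if len(servers) > MAXNS:
            print(f"Nameserver limits exceeded: keeping {servers[:MAXNS]}, "
                  f"omitting {servers[MAXNS:]}")
        return servers[:MAXNS]
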
Dec 12 18:36:57.517244 systemd-networkd[1468]: calia51e0b1078b: Gained IPv6LL Dec 12 18:36:57.545961 containerd[1555]: time="2025-12-12T18:36:57.544952801Z" level=info msg="StartContainer for \"810e1de1202a04990fb00d6e6cac30fbc9a9507ad5762b6f010c88cf034656fa\" returns successfully" Dec 12 18:36:57.868516 kubelet[2727]: E1212 18:36:57.868436 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:57.923831 kubelet[2727]: I1212 18:36:57.923723 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-btnsn" podStartSLOduration=77.923693512 podStartE2EDuration="1m17.923693512s" podCreationTimestamp="2025-12-12 18:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 18:36:57.919956512 +0000 UTC m=+79.535323431" watchObservedRunningTime="2025-12-12 18:36:57.923693512 +0000 UTC m=+79.539060411" Dec 12 18:36:58.882657 kubelet[2727]: E1212 18:36:58.874355 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:36:58.964503 containerd[1555]: time="2025-12-12T18:36:58.964448232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:36:59.326175 containerd[1555]: time="2025-12-12T18:36:59.325728149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:36:59.329265 containerd[1555]: time="2025-12-12T18:36:59.329163482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:36:59.329455 containerd[1555]: time="2025-12-12T18:36:59.329348649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:36:59.329661 kubelet[2727]: E1212 18:36:59.329596 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:36:59.329728 kubelet[2727]: E1212 18:36:59.329676 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:36:59.330088 kubelet[2727]: E1212 18:36:59.329930 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f79zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-845fcfc7bd-n97cm_calico-system(97c6c51c-88d4-4a5c-b977-999f64d65996): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:36:59.331525 kubelet[2727]: E1212 18:36:59.331405 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:36:59.877611 kubelet[2727]: E1212 18:36:59.877543 2727 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:00.982393 containerd[1555]: time="2025-12-12T18:37:00.982337358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:37:01.356905 containerd[1555]: time="2025-12-12T18:37:01.356598495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:01.362118 containerd[1555]: time="2025-12-12T18:37:01.360719542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:37:01.362118 containerd[1555]: time="2025-12-12T18:37:01.360907845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:37:01.362877 kubelet[2727]: E1212 18:37:01.362708 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:37:01.362877 kubelet[2727]: E1212 18:37:01.362836 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:37:01.364181 kubelet[2727]: E1212 18:37:01.364122 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c925c1a2ddfa4954a8cd0226d83a2f39,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:01.368646 containerd[1555]: time="2025-12-12T18:37:01.368596321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:37:01.706834 containerd[1555]: time="2025-12-12T18:37:01.706679025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:01.716934 containerd[1555]: time="2025-12-12T18:37:01.710389031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:37:01.716934 containerd[1555]: time="2025-12-12T18:37:01.710517121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:37:01.717238 kubelet[2727]: E1212 18:37:01.710728 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:37:01.717238 kubelet[2727]: E1212 18:37:01.710826 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:37:01.717238 kubelet[2727]: E1212 18:37:01.710981 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:01.718144 kubelet[2727]: E1212 18:37:01.718062 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:37:01.916058 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:37030.service - OpenSSH per-connection server daemon (10.0.0.1:37030). 
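
Note: every pull in this log fails identically — containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, i.e. the v3.30.4 tags do not exist under ghcr.io/flatcar/calico/*. One can confirm that without a container runtime by querying the registry's OCI distribution API directly; a sketch, assuming the repository is public so an anonymous bearer token from GHCR's token endpoint suffices (the endpoint, headers, and media types are standard distribution-spec/GHCR behaviour, not taken from this log):

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo, tag):
        """Return True if ghcr.io/<repo>:<tag> resolves, False on 404."""
        # Anonymous pull token for a public repository.
        tok_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(tok_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": ", ".join([
                    "application/vnd.oci.image.index.v1+json",
                    "application/vnd.docker.distribution.manifest.list.v2+json",
                    "application/vnd.oci.image.manifest.v1+json",
                ]),
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as e:
            if e.code == 404:  # same status containerd logged above
                return False
            raise

    print(tag_exists("flatcar/calico/goldmane", "v3.30.4"))

On the node itself, ctr -n k8s.io images pull or crictl pull against the same reference would surface the same 404.
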
Dec 12 18:37:01.950317 kubelet[2727]: E1212 18:37:01.948840 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:01.961761 containerd[1555]: time="2025-12-12T18:37:01.952702458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:37:02.160449 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 37030 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:02.167854 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:02.183339 systemd-logind[1539]: New session 8 of user core. Dec 12 18:37:02.202343 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 18:37:02.320647 containerd[1555]: time="2025-12-12T18:37:02.319951578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:02.327187 containerd[1555]: time="2025-12-12T18:37:02.324146638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:37:02.327187 containerd[1555]: time="2025-12-12T18:37:02.324307020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:02.327442 kubelet[2727]: E1212 18:37:02.324611 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:02.327442 kubelet[2727]: E1212 18:37:02.324695 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:02.327442 kubelet[2727]: E1212 18:37:02.324908 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4j8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-gmfhk_calico-apiserver(433cc46b-ce9f-4fdf-9392-bdd29bdc4330): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:02.330299 kubelet[2727]: E1212 18:37:02.330054 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:37:02.591153 sshd[5133]: Connection closed by 10.0.0.1 port 37030 Dec 12 18:37:02.592025 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:02.608997 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:37030.service: Deactivated successfully. Dec 12 18:37:02.618954 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 18:37:02.624021 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Dec 12 18:37:02.634576 systemd-logind[1539]: Removed session 8. 
Dec 12 18:37:02.957227 containerd[1555]: time="2025-12-12T18:37:02.954509771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:37:03.315308 containerd[1555]: time="2025-12-12T18:37:03.314944766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:03.321095 containerd[1555]: time="2025-12-12T18:37:03.320900695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:37:03.321095 containerd[1555]: time="2025-12-12T18:37:03.320986215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:03.322176 kubelet[2727]: E1212 18:37:03.321533 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:03.322176 kubelet[2727]: E1212 18:37:03.321607 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:03.322176 kubelet[2727]: E1212 18:37:03.321902 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmk7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:03.323628 kubelet[2727]: E1212 18:37:03.323217 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:37:06.952385 containerd[1555]: time="2025-12-12T18:37:06.952303958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:37:07.339841 containerd[1555]: time="2025-12-12T18:37:07.339309822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:07.345367 containerd[1555]: time="2025-12-12T18:37:07.345245846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:37:07.345598 containerd[1555]: time="2025-12-12T18:37:07.345413402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:07.346857 kubelet[2727]: E1212 18:37:07.346769 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:37:07.347459 kubelet[2727]: E1212 18:37:07.346870 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:37:07.349069 kubelet[2727]: E1212 18:37:07.348983 2727 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9dsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:07.350818 kubelet[2727]: E1212 18:37:07.350696 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" 
podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:37:07.622136 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:37044.service - OpenSSH per-connection server daemon (10.0.0.1:37044). Dec 12 18:37:07.790843 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 37044 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:07.795915 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:07.819635 systemd-logind[1539]: New session 9 of user core. Dec 12 18:37:07.835934 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 18:37:07.961875 kubelet[2727]: E1212 18:37:07.957801 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:08.231417 sshd[5152]: Connection closed by 10.0.0.1 port 37044 Dec 12 18:37:08.229500 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:08.237991 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:37044.service: Deactivated successfully. Dec 12 18:37:08.250535 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 18:37:08.263569 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Dec 12 18:37:08.275006 systemd-logind[1539]: Removed session 9. Dec 12 18:37:09.969295 containerd[1555]: time="2025-12-12T18:37:09.968643328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:37:10.415708 containerd[1555]: time="2025-12-12T18:37:10.410859511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:10.415708 containerd[1555]: time="2025-12-12T18:37:10.412569572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:37:10.415708 containerd[1555]: time="2025-12-12T18:37:10.413732602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:37:10.416033 kubelet[2727]: E1212 18:37:10.414026 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:37:10.416033 kubelet[2727]: E1212 18:37:10.414120 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:37:10.416033 kubelet[2727]: E1212 18:37:10.414903 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:10.417379 containerd[1555]: time="2025-12-12T18:37:10.417327633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:37:10.805332 containerd[1555]: time="2025-12-12T18:37:10.804913750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:10.820330 containerd[1555]: time="2025-12-12T18:37:10.820132070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:37:10.820743 containerd[1555]: time="2025-12-12T18:37:10.820204266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:37:10.824494 kubelet[2727]: E1212 18:37:10.821840 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:37:10.840928 kubelet[2727]: E1212 18:37:10.824407 2727 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:37:10.840928 kubelet[2727]: E1212 18:37:10.833107 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:10.842831 kubelet[2727]: E1212 18:37:10.842731 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:37:11.950826 kubelet[2727]: E1212 18:37:11.950710 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:37:13.276299 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:56900.service - OpenSSH per-connection server daemon (10.0.0.1:56900). Dec 12 18:37:13.597720 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 56900 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:13.605142 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:13.637662 systemd-logind[1539]: New session 10 of user core. Dec 12 18:37:13.644700 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 18:37:14.014413 sshd[5177]: Connection closed by 10.0.0.1 port 56900 Dec 12 18:37:14.015498 sshd-session[5174]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:14.035445 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:56900.service: Deactivated successfully. Dec 12 18:37:14.045493 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 18:37:14.057243 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Dec 12 18:37:14.068182 systemd-logind[1539]: Removed session 10. 
Dec 12 18:37:14.965967 kubelet[2727]: E1212 18:37:14.964837 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:37:14.973462 kubelet[2727]: E1212 18:37:14.973398 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:37:16.032414 kubelet[2727]: E1212 18:37:16.029956 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:17.952567 kubelet[2727]: E1212 18:37:17.952432 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:37:19.063222 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:56912.service - OpenSSH per-connection server daemon (10.0.0.1:56912). Dec 12 18:37:19.248454 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 56912 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:19.253340 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:19.280886 systemd-logind[1539]: New session 11 of user core. Dec 12 18:37:19.310972 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 18:37:19.601910 sshd[5223]: Connection closed by 10.0.0.1 port 56912 Dec 12 18:37:19.604452 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:19.617772 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:56912.service: Deactivated successfully. Dec 12 18:37:19.622397 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 12 18:37:19.627246 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Dec 12 18:37:19.632543 systemd-logind[1539]: Removed session 11. Dec 12 18:37:19.957672 kubelet[2727]: E1212 18:37:19.954041 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:37:21.958604 kubelet[2727]: E1212 18:37:21.958019 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:37:24.645530 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:36440.service - OpenSSH per-connection server daemon (10.0.0.1:36440). Dec 12 18:37:24.866900 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 36440 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:24.879866 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:24.928328 systemd-logind[1539]: New session 12 of user core. Dec 12 18:37:24.943369 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 18:37:25.421669 sshd[5241]: Connection closed by 10.0.0.1 port 36440 Dec 12 18:37:25.427660 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:25.450704 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:36440.service: Deactivated successfully. Dec 12 18:37:25.469055 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 18:37:25.481094 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Dec 12 18:37:25.491473 systemd-logind[1539]: Removed session 12. 
Dec 12 18:37:25.959680 containerd[1555]: time="2025-12-12T18:37:25.954724135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:37:26.366688 containerd[1555]: time="2025-12-12T18:37:26.364275744Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:26.394633 containerd[1555]: time="2025-12-12T18:37:26.391353228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:37:26.394633 containerd[1555]: time="2025-12-12T18:37:26.393678410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:26.397532 kubelet[2727]: E1212 18:37:26.394334 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:26.397532 kubelet[2727]: E1212 18:37:26.395505 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:26.397532 kubelet[2727]: E1212 18:37:26.395870 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4j8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-gmfhk_calico-apiserver(433cc46b-ce9f-4fdf-9392-bdd29bdc4330): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:26.397532 kubelet[2727]: E1212 18:37:26.397249 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:37:26.398341 containerd[1555]: time="2025-12-12T18:37:26.396705120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:37:26.775771 containerd[1555]: time="2025-12-12T18:37:26.775507407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:26.783565 containerd[1555]: time="2025-12-12T18:37:26.783331841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:37:26.783565 containerd[1555]: time="2025-12-12T18:37:26.783506030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:37:26.784108 kubelet[2727]: E1212 18:37:26.783938 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:37:26.784108 kubelet[2727]: E1212 18:37:26.784013 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:37:26.787094 kubelet[2727]: E1212 18:37:26.785034 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f79zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-845fcfc7bd-n97cm_calico-system(97c6c51c-88d4-4a5c-b977-999f64d65996): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:26.787094 kubelet[2727]: E1212 18:37:26.786949 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:37:28.960317 containerd[1555]: time="2025-12-12T18:37:28.959809733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:37:29.323171 containerd[1555]: time="2025-12-12T18:37:29.322339319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:29.335885 containerd[1555]: time="2025-12-12T18:37:29.328944362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:37:29.335885 containerd[1555]: time="2025-12-12T18:37:29.329112962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:29.335885 containerd[1555]: time="2025-12-12T18:37:29.330990930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:37:29.336358 kubelet[2727]: E1212 18:37:29.329461 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:29.336358 kubelet[2727]: E1212 18:37:29.329536 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:37:29.336358 kubelet[2727]: E1212 18:37:29.329837 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmk7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:29.336358 kubelet[2727]: E1212 18:37:29.331714 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:37:29.724983 containerd[1555]: time="2025-12-12T18:37:29.723205654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:29.733520 containerd[1555]: time="2025-12-12T18:37:29.730465469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:37:29.733520 containerd[1555]: time="2025-12-12T18:37:29.730498331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:37:29.733759 kubelet[2727]: E1212 18:37:29.731233 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:37:29.733759 kubelet[2727]: E1212 18:37:29.731325 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:37:29.733759 kubelet[2727]: E1212 18:37:29.731533 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c925c1a2ddfa4954a8cd0226d83a2f39,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:29.738759 containerd[1555]: time="2025-12-12T18:37:29.738692776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:37:30.129043 containerd[1555]: time="2025-12-12T18:37:30.128948385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:30.158801 containerd[1555]: time="2025-12-12T18:37:30.158690246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:37:30.159192 containerd[1555]: time="2025-12-12T18:37:30.159123748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:37:30.165736 kubelet[2727]: E1212 18:37:30.162619 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:37:30.165736 kubelet[2727]: E1212 18:37:30.162727 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:37:30.165736 kubelet[2727]: E1212 18:37:30.162934 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:30.165736 kubelet[2727]: E1212 18:37:30.164180 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:37:30.482960 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:56010.service - OpenSSH per-connection server daemon (10.0.0.1:56010). 
Dec 12 18:37:30.704867 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 56010 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:30.711347 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:30.745899 systemd-logind[1539]: New session 13 of user core. Dec 12 18:37:30.763811 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 18:37:31.056303 sshd[5265]: Connection closed by 10.0.0.1 port 56010 Dec 12 18:37:31.057011 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:31.071526 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:56010.service: Deactivated successfully. Dec 12 18:37:31.077184 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 18:37:31.079954 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Dec 12 18:37:31.091437 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:56020.service - OpenSSH per-connection server daemon (10.0.0.1:56020). Dec 12 18:37:31.095010 systemd-logind[1539]: Removed session 13. Dec 12 18:37:31.246679 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 56020 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:31.247493 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:31.262655 systemd-logind[1539]: New session 14 of user core. Dec 12 18:37:31.286893 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 18:37:31.854129 sshd[5284]: Connection closed by 10.0.0.1 port 56020 Dec 12 18:37:31.856252 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:31.882523 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:56020.service: Deactivated successfully. Dec 12 18:37:31.897866 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 18:37:31.906713 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Dec 12 18:37:31.920700 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028). Dec 12 18:37:31.928739 systemd-logind[1539]: Removed session 14. Dec 12 18:37:32.056694 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:32.060222 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:32.083540 systemd-logind[1539]: New session 15 of user core. Dec 12 18:37:32.105318 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 18:37:32.497671 sshd[5299]: Connection closed by 10.0.0.1 port 56028 Dec 12 18:37:32.500039 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:32.511047 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Dec 12 18:37:32.512018 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:56028.service: Deactivated successfully. Dec 12 18:37:32.526915 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 18:37:32.535918 systemd-logind[1539]: Removed session 15. 
Dec 12 18:37:34.959838 containerd[1555]: time="2025-12-12T18:37:34.959611534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:37:35.311363 containerd[1555]: time="2025-12-12T18:37:35.310716440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:35.321038 containerd[1555]: time="2025-12-12T18:37:35.320678047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:37:35.321038 containerd[1555]: time="2025-12-12T18:37:35.320972626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:37:35.321583 kubelet[2727]: E1212 18:37:35.321511 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:37:35.324825 kubelet[2727]: E1212 18:37:35.321902 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:37:35.324825 kubelet[2727]: E1212 18:37:35.322358 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:35.325078 containerd[1555]: time="2025-12-12T18:37:35.323886463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:37:35.737858 containerd[1555]: time="2025-12-12T18:37:35.737748554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:35.742449 containerd[1555]: time="2025-12-12T18:37:35.742248840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:37:35.742449 containerd[1555]: time="2025-12-12T18:37:35.742452456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:37:35.743018 kubelet[2727]: E1212 18:37:35.742949 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:37:35.748865 kubelet[2727]: E1212 18:37:35.743190 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:37:35.748865 kubelet[2727]: E1212 18:37:35.743636 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9dsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:35.748865 kubelet[2727]: E1212 18:37:35.745771 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:37:35.749547 containerd[1555]: time="2025-12-12T18:37:35.746567372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:37:36.130147 containerd[1555]: time="2025-12-12T18:37:36.128998554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:37:36.148554 containerd[1555]: time="2025-12-12T18:37:36.148205346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:37:36.148554 containerd[1555]: time="2025-12-12T18:37:36.148298693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:37:36.148847 kubelet[2727]: E1212 18:37:36.148693 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:37:36.148955 kubelet[2727]: E1212 18:37:36.148774 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:37:36.151462 kubelet[2727]: E1212 18:37:36.149522 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:37:36.163230 kubelet[2727]: E1212 18:37:36.157361 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:37:37.557864 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:56042.service - OpenSSH per-connection server daemon (10.0.0.1:56042). Dec 12 18:37:37.720755 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 56042 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:37.724947 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:37.748774 systemd-logind[1539]: New session 16 of user core. 
Dec 12 18:37:37.764234 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 18:37:37.965615 kubelet[2727]: E1212 18:37:37.965017 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:37:38.114906 sshd[5317]: Connection closed by 10.0.0.1 port 56042 Dec 12 18:37:38.118962 sshd-session[5314]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:38.144903 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:56042.service: Deactivated successfully. Dec 12 18:37:38.149708 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 18:37:38.160008 systemd-logind[1539]: Session 16 logged out. Waiting for processes to exit. Dec 12 18:37:38.170294 systemd-logind[1539]: Removed session 16. Dec 12 18:37:39.981903 kubelet[2727]: E1212 18:37:39.979055 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:37:42.951852 kubelet[2727]: E1212 18:37:42.951717 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:37:43.161342 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488). Dec 12 18:37:43.346883 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:43.350585 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:43.371993 systemd-logind[1539]: New session 17 of user core. Dec 12 18:37:43.386682 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 18:37:43.726995 sshd[5337]: Connection closed by 10.0.0.1 port 55488 Dec 12 18:37:43.726093 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:43.738870 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:55488.service: Deactivated successfully. Dec 12 18:37:43.754187 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 12 18:37:43.767106 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Dec 12 18:37:43.770631 systemd-logind[1539]: Removed session 17. Dec 12 18:37:44.958898 kubelet[2727]: E1212 18:37:44.954237 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:37:46.961345 kubelet[2727]: E1212 18:37:46.960734 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:37:47.950064 kubelet[2727]: E1212 18:37:47.949526 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:37:48.766923 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502). Dec 12 18:37:48.894496 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:48.898972 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:48.923346 systemd-logind[1539]: New session 18 of user core. Dec 12 18:37:48.939185 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 18:37:49.294324 sshd[5382]: Connection closed by 10.0.0.1 port 55502 Dec 12 18:37:49.296615 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:49.323535 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:55502.service: Deactivated successfully. Dec 12 18:37:49.329178 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 18:37:49.336754 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Dec 12 18:37:49.343058 systemd-logind[1539]: Removed session 18. 
Dec 12 18:37:51.957415 kubelet[2727]: E1212 18:37:51.955816 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:37:51.957415 kubelet[2727]: E1212 18:37:51.956972 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:37:53.965980 kubelet[2727]: E1212 18:37:53.958856 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:37:53.973614 kubelet[2727]: E1212 18:37:53.972094 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:37:54.331588 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:51562.service - OpenSSH per-connection server daemon (10.0.0.1:51562). Dec 12 18:37:54.465039 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 51562 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:37:54.484983 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:37:54.517267 systemd-logind[1539]: New session 19 of user core. 
Dec 12 18:37:54.539214 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 18:37:54.888608 sshd[5400]: Connection closed by 10.0.0.1 port 51562 Dec 12 18:37:54.889067 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Dec 12 18:37:54.908740 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:51562.service: Deactivated successfully. Dec 12 18:37:54.915725 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 18:37:54.921328 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Dec 12 18:37:54.932956 systemd-logind[1539]: Removed session 19. Dec 12 18:37:56.961695 kubelet[2727]: E1212 18:37:56.957016 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:37:59.929209 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:51568.service - OpenSSH per-connection server daemon (10.0.0.1:51568). Dec 12 18:37:59.951220 kubelet[2727]: E1212 18:37:59.950800 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:38:00.050529 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 51568 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:00.050144 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:00.075142 systemd-logind[1539]: New session 20 of user core. Dec 12 18:38:00.085133 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 18:38:00.318697 sshd[5416]: Connection closed by 10.0.0.1 port 51568 Dec 12 18:38:00.319040 sshd-session[5413]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:00.325095 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:51568.service: Deactivated successfully. Dec 12 18:38:00.328414 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 18:38:00.331777 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Dec 12 18:38:00.340085 systemd-logind[1539]: Removed session 20. 
Dec 12 18:38:01.952892 kubelet[2727]: E1212 18:38:01.952824 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:02.952238 kubelet[2727]: E1212 18:38:02.952169 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:04.957014 kubelet[2727]: E1212 18:38:04.955941 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:38:05.344233 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:37066.service - OpenSSH per-connection server daemon (10.0.0.1:37066). Dec 12 18:38:05.479516 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 37066 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:05.483725 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:05.494830 systemd-logind[1539]: New session 21 of user core. Dec 12 18:38:05.508933 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 18:38:05.883755 sshd[5432]: Connection closed by 10.0.0.1 port 37066 Dec 12 18:38:05.887369 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:05.897353 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:37066.service: Deactivated successfully. Dec 12 18:38:05.907505 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 18:38:05.910704 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Dec 12 18:38:05.918858 systemd-logind[1539]: Removed session 21. 
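The dns.go:153 warnings above come from kubelet's limit of three nameservers per pod resolv.conf (mirroring the classic glibc resolver limit): when the node's /etc/resolv.conf lists more, kubelet keeps the first three and logs that the rest were omitted. A sketch of a node resolv.conf that would produce exactly this "applied nameserver line"; the first three entries are taken from the log, the fourth is an invented placeholder since the omitted entries are never logged:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.53    # hypothetical extra entry; kubelet drops everything past the third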
Dec 12 18:38:06.959930 kubelet[2727]: E1212 18:38:06.959781 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:38:06.961349 containerd[1555]: time="2025-12-12T18:38:06.961052447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 18:38:07.360088 containerd[1555]: time="2025-12-12T18:38:07.359850180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:07.377651 containerd[1555]: time="2025-12-12T18:38:07.376755611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 18:38:07.377651 containerd[1555]: time="2025-12-12T18:38:07.376985128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 18:38:07.378002 kubelet[2727]: E1212 18:38:07.377297 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:38:07.378002 kubelet[2727]: E1212 18:38:07.377373 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 18:38:07.378002 kubelet[2727]: E1212 18:38:07.377578 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f79zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-845fcfc7bd-n97cm_calico-system(97c6c51c-88d4-4a5c-b977-999f64d65996): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:07.379560 kubelet[2727]: E1212 18:38:07.379240 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:38:08.954749 kubelet[2727]: E1212 18:38:08.952717 2727 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:08.962050 kubelet[2727]: E1212 18:38:08.961751 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:38:10.923592 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:37362.service - OpenSSH per-connection server daemon (10.0.0.1:37362). Dec 12 18:38:10.961990 kubelet[2727]: E1212 18:38:10.961902 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:38:11.070456 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 37362 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:11.083697 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:11.103004 systemd-logind[1539]: New session 22 of user core. Dec 12 18:38:11.114081 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 18:38:11.405585 sshd[5454]: Connection closed by 10.0.0.1 port 37362 Dec 12 18:38:11.403011 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:11.410435 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:37362.service: Deactivated successfully. Dec 12 18:38:11.431384 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 18:38:11.448547 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Dec 12 18:38:11.461207 systemd-logind[1539]: Removed session 22. 
Dec 12 18:38:11.952754 containerd[1555]: time="2025-12-12T18:38:11.950977558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 18:38:12.397199 containerd[1555]: time="2025-12-12T18:38:12.396769415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:12.431756 containerd[1555]: time="2025-12-12T18:38:12.428368143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 18:38:12.431756 containerd[1555]: time="2025-12-12T18:38:12.428553566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 18:38:12.432299 kubelet[2727]: E1212 18:38:12.428811 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:38:12.432299 kubelet[2727]: E1212 18:38:12.428885 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 18:38:12.432299 kubelet[2727]: E1212 18:38:12.429044 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c925c1a2ddfa4954a8cd0226d83a2f39,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:12.443661 containerd[1555]: time="2025-12-12T18:38:12.443585509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 18:38:12.853195 containerd[1555]: time="2025-12-12T18:38:12.852603304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:12.855934 containerd[1555]: time="2025-12-12T18:38:12.855728157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 18:38:12.855934 containerd[1555]: time="2025-12-12T18:38:12.855888381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 18:38:12.856324 kubelet[2727]: E1212 18:38:12.856274 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:38:12.856531 kubelet[2727]: E1212 18:38:12.856500 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 18:38:12.860548 kubelet[2727]: E1212 18:38:12.858541 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dpcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5d6876f9d4-45g7h_calico-system(4469ce36-6286-4f2c-84e3-e653a9c04ba0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:12.860548 kubelet[2727]: E1212 18:38:12.859946 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:38:14.953589 kubelet[2727]: E1212 18:38:14.952942 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:16.431383 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:37368.service - OpenSSH per-connection server daemon (10.0.0.1:37368). 
Dec 12 18:38:16.587805 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 37368 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:16.599847 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:16.618105 systemd-logind[1539]: New session 23 of user core. Dec 12 18:38:16.642279 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 18:38:16.964827 sshd[5497]: Connection closed by 10.0.0.1 port 37368 Dec 12 18:38:16.968566 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:16.996766 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:37368.service: Deactivated successfully. Dec 12 18:38:17.005326 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 18:38:17.007870 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Dec 12 18:38:17.024445 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:37376.service - OpenSSH per-connection server daemon (10.0.0.1:37376). Dec 12 18:38:17.030953 systemd-logind[1539]: Removed session 23. Dec 12 18:38:17.142235 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 37376 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:17.146717 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:17.169322 systemd-logind[1539]: New session 24 of user core. Dec 12 18:38:17.186721 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 18:38:17.931044 sshd[5515]: Connection closed by 10.0.0.1 port 37376 Dec 12 18:38:17.934641 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:17.955499 kubelet[2727]: E1212 18:38:17.955429 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:38:17.958683 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:37376.service: Deactivated successfully. Dec 12 18:38:17.971609 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 18:38:17.976587 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Dec 12 18:38:17.982043 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:37390.service - OpenSSH per-connection server daemon (10.0.0.1:37390). Dec 12 18:38:17.990868 systemd-logind[1539]: Removed session 24. Dec 12 18:38:18.118867 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 37390 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:18.136238 sshd-session[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:18.188325 systemd-logind[1539]: New session 25 of user core. Dec 12 18:38:18.213913 systemd[1]: Started session-25.scope - Session 25 of User core. 
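The session-23/24/25 churn above is the normal lifecycle for short-lived SSH connections: sshd accepts the public key, pam_unix opens the session, systemd-logind allocates a session-N.scope, and both are torn down when the client disconnects. On the node this can be cross-checked with (assumes shell access; the session id is hypothetical):

    loginctl list-sessions                      # one row per live session-N
    loginctl session-status 25                  # scope unit, TTY, and the leader sshd process
    journalctl -t sshd -t sshd-session -n 50    # the same accept/close pairs as above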
Dec 12 18:38:18.969153 containerd[1555]: time="2025-12-12T18:38:18.969092217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:38:19.348920 containerd[1555]: time="2025-12-12T18:38:19.348592800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:19.354647 containerd[1555]: time="2025-12-12T18:38:19.354212678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:38:19.354647 containerd[1555]: time="2025-12-12T18:38:19.354368815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:38:19.355201 kubelet[2727]: E1212 18:38:19.355126 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:38:19.355201 kubelet[2727]: E1212 18:38:19.355206 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:38:19.355964 kubelet[2727]: E1212 18:38:19.355407 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4j8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-gmfhk_calico-apiserver(433cc46b-ce9f-4fdf-9392-bdd29bdc4330): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:19.365440 kubelet[2727]: E1212 18:38:19.364856 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:38:19.733863 sshd[5529]: Connection closed by 10.0.0.1 port 37390 Dec 12 18:38:19.739085 sshd-session[5526]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:19.758734 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:37390.service: Deactivated successfully. Dec 12 18:38:19.764471 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 18:38:19.766712 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Dec 12 18:38:19.771387 systemd-logind[1539]: Removed session 25. Dec 12 18:38:19.775355 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:37396.service - OpenSSH per-connection server daemon (10.0.0.1:37396). Dec 12 18:38:19.945872 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 37396 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:19.951553 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:19.958947 systemd-logind[1539]: New session 26 of user core. Dec 12 18:38:19.967413 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 12 18:38:20.627299 sshd[5551]: Connection closed by 10.0.0.1 port 37396 Dec 12 18:38:20.633891 sshd-session[5548]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:20.656973 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:37396.service: Deactivated successfully. Dec 12 18:38:20.661322 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 18:38:20.680728 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. Dec 12 18:38:20.688644 systemd-logind[1539]: Removed session 26. Dec 12 18:38:20.692178 systemd[1]: Started sshd@26-10.0.0.51:22-10.0.0.1:42320.service - OpenSSH per-connection server daemon (10.0.0.1:42320). 
Dec 12 18:38:20.808547 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 42320 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:20.813053 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:20.838128 systemd-logind[1539]: New session 27 of user core. Dec 12 18:38:20.846437 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 12 18:38:20.960045 containerd[1555]: time="2025-12-12T18:38:20.959574917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 18:38:21.109671 sshd[5566]: Connection closed by 10.0.0.1 port 42320 Dec 12 18:38:21.111459 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:21.124964 systemd[1]: sshd@26-10.0.0.51:22-10.0.0.1:42320.service: Deactivated successfully. Dec 12 18:38:21.127947 systemd[1]: session-27.scope: Deactivated successfully. Dec 12 18:38:21.132673 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit. Dec 12 18:38:21.139471 systemd-logind[1539]: Removed session 27. Dec 12 18:38:21.406749 containerd[1555]: time="2025-12-12T18:38:21.406222664Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:21.436213 containerd[1555]: time="2025-12-12T18:38:21.434625470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 18:38:21.436213 containerd[1555]: time="2025-12-12T18:38:21.434780555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 18:38:21.436496 kubelet[2727]: E1212 18:38:21.435203 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:38:21.436496 kubelet[2727]: E1212 18:38:21.435302 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 18:38:21.436496 kubelet[2727]: E1212 18:38:21.435846 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:21.454643 containerd[1555]: time="2025-12-12T18:38:21.453489678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 18:38:21.869478 containerd[1555]: time="2025-12-12T18:38:21.868934566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:21.915909 containerd[1555]: time="2025-12-12T18:38:21.915147270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 18:38:21.915909 containerd[1555]: time="2025-12-12T18:38:21.915295854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 18:38:21.917267 kubelet[2727]: E1212 18:38:21.915521 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:38:21.917267 kubelet[2727]: E1212 18:38:21.915600 2727 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 18:38:21.921191 kubelet[2727]: E1212 18:38:21.919991 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mtm8f_calico-system(8652b687-d41e-47f6-a864-e604e24deb5b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:21.923037 kubelet[2727]: E1212 18:38:21.922206 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:38:21.948800 kubelet[2727]: E1212 18:38:21.948710 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:22.953323 containerd[1555]: time="2025-12-12T18:38:22.952968492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 18:38:23.351438 containerd[1555]: time="2025-12-12T18:38:23.347599396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:23.432879 containerd[1555]: time="2025-12-12T18:38:23.431117455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 18:38:23.432879 containerd[1555]: time="2025-12-12T18:38:23.431277369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 18:38:23.433149 kubelet[2727]: E1212 18:38:23.431491 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:38:23.433149 kubelet[2727]: E1212 18:38:23.432947 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 18:38:23.433646 kubelet[2727]: E1212 18:38:23.433285 2727 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmk7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f4dfd9b79-68db6_calico-apiserver(69dcfde4-a416-44cb-b592-faa494483016): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:23.434375 containerd[1555]: time="2025-12-12T18:38:23.434314317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 18:38:23.435638 kubelet[2727]: E1212 18:38:23.435444 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:38:23.904462 containerd[1555]: time="2025-12-12T18:38:23.904109815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 18:38:24.006983 containerd[1555]: time="2025-12-12T18:38:24.006728676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 18:38:24.006983 containerd[1555]: time="2025-12-12T18:38:24.006923988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 18:38:24.008935 kubelet[2727]: E1212 18:38:24.007877 2727 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:38:24.008935 kubelet[2727]: E1212 18:38:24.007959 2727 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 18:38:24.008935 kubelet[2727]: E1212 18:38:24.008144 2727 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9dsm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-clgcw_calico-system(b2de3129-f188-4d80-9725-7a97224ed672): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 18:38:24.010554 kubelet[2727]: E1212 18:38:24.010460 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" 
podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:38:25.959929 kubelet[2727]: E1212 18:38:25.957275 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:38:26.155335 systemd[1]: Started sshd@27-10.0.0.51:22-10.0.0.1:42330.service - OpenSSH per-connection server daemon (10.0.0.1:42330). Dec 12 18:38:26.312271 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 42330 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:26.308236 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:26.325135 systemd-logind[1539]: New session 28 of user core. Dec 12 18:38:26.342481 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 12 18:38:26.688166 sshd[5596]: Connection closed by 10.0.0.1 port 42330 Dec 12 18:38:26.687945 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:26.713685 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit. Dec 12 18:38:26.715167 systemd[1]: sshd@27-10.0.0.51:22-10.0.0.1:42330.service: Deactivated successfully. Dec 12 18:38:26.727510 systemd[1]: session-28.scope: Deactivated successfully. Dec 12 18:38:26.733830 systemd-logind[1539]: Removed session 28. Dec 12 18:38:28.954222 kubelet[2727]: E1212 18:38:28.954062 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:29.951846 kubelet[2727]: E1212 18:38:29.950985 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:38:31.722860 systemd[1]: Started sshd@28-10.0.0.51:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114). 
Dec 12 18:38:31.822990 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:31.823465 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:31.841405 systemd-logind[1539]: New session 29 of user core. Dec 12 18:38:31.860347 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 12 18:38:32.069211 sshd[5620]: Connection closed by 10.0.0.1 port 53114 Dec 12 18:38:32.070045 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:32.080150 systemd[1]: sshd@28-10.0.0.51:22-10.0.0.1:53114.service: Deactivated successfully. Dec 12 18:38:32.083455 systemd[1]: session-29.scope: Deactivated successfully. Dec 12 18:38:32.087252 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit. Dec 12 18:38:32.100999 systemd-logind[1539]: Removed session 29. Dec 12 18:38:32.952651 kubelet[2727]: E1212 18:38:32.952474 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:38:35.954084 kubelet[2727]: E1212 18:38:35.953555 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016" Dec 12 18:38:35.958141 kubelet[2727]: E1212 18:38:35.958066 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:38:36.952239 kubelet[2727]: E1212 18:38:36.952090 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:38:37.114630 systemd[1]: Started sshd@29-10.0.0.51:22-10.0.0.1:53130.service - OpenSSH per-connection server daemon (10.0.0.1:53130). Dec 12 18:38:37.281019 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 53130 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:37.288554 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:37.305754 systemd-logind[1539]: New session 30 of user core. Dec 12 18:38:37.329695 systemd[1]: Started session-30.scope - Session 30 of User core. Dec 12 18:38:37.650476 sshd[5638]: Connection closed by 10.0.0.1 port 53130 Dec 12 18:38:37.657392 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:37.685653 systemd[1]: sshd@29-10.0.0.51:22-10.0.0.1:53130.service: Deactivated successfully. Dec 12 18:38:37.700330 systemd[1]: session-30.scope: Deactivated successfully. Dec 12 18:38:37.726988 systemd-logind[1539]: Session 30 logged out. Waiting for processes to exit. Dec 12 18:38:37.740745 systemd-logind[1539]: Removed session 30. Dec 12 18:38:38.963969 kubelet[2727]: E1212 18:38:38.963897 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5d6876f9d4-45g7h" podUID="4469ce36-6286-4f2c-84e3-e653a9c04ba0" Dec 12 18:38:41.951718 kubelet[2727]: E1212 18:38:41.951629 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-845fcfc7bd-n97cm" podUID="97c6c51c-88d4-4a5c-b977-999f64d65996" Dec 12 18:38:42.714960 systemd[1]: Started sshd@30-10.0.0.51:22-10.0.0.1:57086.service - OpenSSH per-connection server daemon (10.0.0.1:57086). 
Dec 12 18:38:42.875169 sshd[5656]: Accepted publickey for core from 10.0.0.1 port 57086 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:42.877011 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:42.887413 systemd-logind[1539]: New session 31 of user core. Dec 12 18:38:42.898207 systemd[1]: Started session-31.scope - Session 31 of User core. Dec 12 18:38:43.309597 sshd[5659]: Connection closed by 10.0.0.1 port 57086 Dec 12 18:38:43.311056 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:43.328036 systemd[1]: sshd@30-10.0.0.51:22-10.0.0.1:57086.service: Deactivated successfully. Dec 12 18:38:43.334396 systemd[1]: session-31.scope: Deactivated successfully. Dec 12 18:38:43.344104 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit. Dec 12 18:38:43.347546 systemd-logind[1539]: Removed session 31. Dec 12 18:38:43.953569 kubelet[2727]: E1212 18:38:43.951738 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 12 18:38:45.956445 kubelet[2727]: E1212 18:38:45.956367 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-gmfhk" podUID="433cc46b-ce9f-4fdf-9392-bdd29bdc4330" Dec 12 18:38:47.958435 kubelet[2727]: E1212 18:38:47.956646 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-clgcw" podUID="b2de3129-f188-4d80-9725-7a97224ed672" Dec 12 18:38:48.337063 systemd[1]: Started sshd@31-10.0.0.51:22-10.0.0.1:57092.service - OpenSSH per-connection server daemon (10.0.0.1:57092). Dec 12 18:38:48.574000 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 57092 ssh2: RSA SHA256:P1s5gEg3hMj1tDtE6I6RWVrUOC+71cTuFOU1V+vviNE Dec 12 18:38:48.580215 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 18:38:48.600871 systemd-logind[1539]: New session 32 of user core. Dec 12 18:38:48.612677 systemd[1]: Started session-32.scope - Session 32 of User core. Dec 12 18:38:48.872291 sshd[5701]: Connection closed by 10.0.0.1 port 57092 Dec 12 18:38:48.873152 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Dec 12 18:38:48.889595 systemd[1]: sshd@31-10.0.0.51:22-10.0.0.1:57092.service: Deactivated successfully. Dec 12 18:38:48.896387 systemd[1]: session-32.scope: Deactivated successfully. Dec 12 18:38:48.898708 systemd-logind[1539]: Session 32 logged out. Waiting for processes to exit. Dec 12 18:38:48.901238 systemd-logind[1539]: Removed session 32. 
Dec 12 18:38:48.958997 kubelet[2727]: E1212 18:38:48.957200 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mtm8f" podUID="8652b687-d41e-47f6-a864-e604e24deb5b" Dec 12 18:38:49.952830 kubelet[2727]: E1212 18:38:49.952039 2727 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f4dfd9b79-68db6" podUID="69dcfde4-a416-44cb-b592-faa494483016"