Jan 28 01:43:55.580205 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 22:30:15 -00 2026
Jan 28 01:43:55.580335 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262
Jan 28 01:43:55.580348 kernel: BIOS-provided physical RAM map:
Jan 28 01:43:55.580361 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:43:55.580370 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:43:55.580379 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:43:55.580389 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:43:55.580399 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:43:55.580408 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 01:43:55.580417 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 01:43:55.580426 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 28 01:43:55.580435 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 01:43:55.580447 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 01:43:55.580456 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 01:43:55.580467 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 01:43:55.580477 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:43:55.580487 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 01:43:55.580499 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 01:43:55.580509 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 01:43:55.580518 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 01:43:55.580528 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 01:43:55.580538 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:43:55.580547 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 01:43:55.580557 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:43:55.580567 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 01:43:55.580576 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:43:55.580586 kernel: NX (Execute Disable) protection: active
Jan 28 01:43:55.580595 kernel: APIC: Static calls initialized
Jan 28 01:43:55.580608 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 28 01:43:55.580618 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 28 01:43:55.580627 kernel: extended physical RAM map:
Jan 28 01:43:55.580637 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 28 01:43:55.580646 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 28 01:43:55.580656 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 28 01:43:55.580666 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 28 01:43:55.580675 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 28 01:43:55.580685 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 28 01:43:55.580695 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 28 01:43:55.580705 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 28 01:43:55.580766 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 28 01:43:55.580783 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 28 01:43:55.580793 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 28 01:43:55.580801 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 28 01:43:55.580810 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 28 01:43:55.580823 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 28 01:43:55.580833 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 28 01:43:55.580844 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 28 01:43:55.580854 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 28 01:43:55.580864 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 28 01:43:55.580875 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 28 01:43:55.580885 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 28 01:43:55.580895 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 28 01:43:55.580905 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 28 01:43:55.580916 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 28 01:43:55.580926 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 28 01:43:55.580939 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 01:43:55.580950 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 28 01:43:55.580960 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 28 01:43:55.580970 kernel: efi: EFI v2.7 by EDK II
Jan 28 01:43:55.580980 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 28 01:43:55.585991 kernel: random: crng init done
Jan 28 01:43:55.586012 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 28 01:43:55.586049 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 28 01:43:55.586060 kernel: secureboot: Secure boot disabled
Jan 28 01:43:55.586094 kernel: SMBIOS 2.8 present.
Jan 28 01:43:55.586105 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 28 01:43:55.586350 kernel: DMI: Memory slots populated: 1/1
Jan 28 01:43:55.586361 kernel: Hypervisor detected: KVM
Jan 28 01:43:55.586372 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 01:43:55.586382 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 01:43:55.586393 kernel: kvm-clock: using sched offset of 8141777169 cycles
Jan 28 01:43:55.586405 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 01:43:55.586416 kernel: tsc: Detected 2445.426 MHz processor
Jan 28 01:43:55.586426 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 01:43:55.586437 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 01:43:55.586447 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 28 01:43:55.586458 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 28 01:43:55.586473 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 01:43:55.586483 kernel: Using GB pages for direct mapping
Jan 28 01:43:55.586494 kernel: ACPI: Early table checksum verification disabled
Jan 28 01:43:55.586504 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 28 01:43:55.586515 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 28 01:43:55.586526 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.586537 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.586547 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 28 01:43:55.586558 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.586571 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.586582 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.586593 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 01:43:55.586604 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 28 01:43:55.586615 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 28 01:43:55.586625 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 28 01:43:55.586636 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 28 01:43:55.586647 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 28 01:43:55.586660 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 28 01:43:55.586671 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 28 01:43:55.586681 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 28 01:43:55.586692 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 28 01:43:55.586702 kernel: No NUMA configuration found
Jan 28 01:43:55.586755 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 28 01:43:55.586769 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 28 01:43:55.586778 kernel: Zone ranges:
Jan 28 01:43:55.586787 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 01:43:55.586797 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 28 01:43:55.586811 kernel: Normal empty
Jan 28 01:43:55.586821 kernel: Device empty
Jan 28 01:43:55.586832 kernel: Movable zone start for each node
Jan 28 01:43:55.586842 kernel: Early memory node ranges
Jan 28 01:43:55.586853 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 28 01:43:55.586864 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 28 01:43:55.586874 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 28 01:43:55.586885 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 28 01:43:55.586895 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 28 01:43:55.586908 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 28 01:43:55.586919 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 28 01:43:55.586929 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 28 01:43:55.586940 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 28 01:43:55.586950 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:43:55.586971 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 28 01:43:55.586985 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 28 01:43:55.586996 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 01:43:55.587006 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 28 01:43:55.587017 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 28 01:43:55.587029 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 28 01:43:55.587040 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 28 01:43:55.587054 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 28 01:43:55.587065 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 01:43:55.587076 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 01:43:55.587087 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 01:43:55.587098 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 01:43:55.587176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 01:43:55.587187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 01:43:55.587199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 01:43:55.587210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 01:43:55.587221 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 01:43:55.587232 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 28 01:43:55.587243 kernel: TSC deadline timer available
Jan 28 01:43:55.587254 kernel: CPU topo: Max. logical packages: 1
Jan 28 01:43:55.587265 kernel: CPU topo: Max. logical dies: 1
Jan 28 01:43:55.587356 kernel: CPU topo: Max. dies per package: 1
Jan 28 01:43:55.587368 kernel: CPU topo: Max. threads per core: 1
Jan 28 01:43:55.587379 kernel: CPU topo: Num. cores per package: 4
Jan 28 01:43:55.587390 kernel: CPU topo: Num. threads per package: 4
Jan 28 01:43:55.587401 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 28 01:43:55.587412 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 01:43:55.587449 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 28 01:43:55.587460 kernel: kvm-guest: setup PV sched yield
Jan 28 01:43:55.587471 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 28 01:43:55.587482 kernel: Booting paravirtualized kernel on KVM
Jan 28 01:43:55.587498 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 01:43:55.587509 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 28 01:43:55.587520 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 28 01:43:55.587531 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 28 01:43:55.587543 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 28 01:43:55.587553 kernel: kvm-guest: PV spinlocks enabled
Jan 28 01:43:55.587564 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 01:43:55.587577 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262
Jan 28 01:43:55.587591 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 01:43:55.587603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 01:43:55.587614 kernel: Fallback order for Node 0: 0
Jan 28 01:43:55.587625 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 28 01:43:55.587636 kernel: Policy zone: DMA32
Jan 28 01:43:55.587647 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 01:43:55.587658 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 28 01:43:55.587669 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 28 01:43:55.587680 kernel: ftrace: allocated 157 pages with 5 groups
Jan 28 01:43:55.587694 kernel: Dynamic Preempt: voluntary
Jan 28 01:43:55.587705 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 01:43:55.587758 kernel: rcu: RCU event tracing is enabled.
Jan 28 01:43:55.587770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 28 01:43:55.587780 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 01:43:55.587789 kernel: Rude variant of Tasks RCU enabled.
Jan 28 01:43:55.587801 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 01:43:55.587812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 01:43:55.587823 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 28 01:43:55.587838 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:43:55.587850 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:43:55.587861 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 28 01:43:55.587872 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 28 01:43:55.587883 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 01:43:55.587894 kernel: Console: colour dummy device 80x25
Jan 28 01:43:55.587905 kernel: printk: legacy console [ttyS0] enabled
Jan 28 01:43:55.587916 kernel: ACPI: Core revision 20240827
Jan 28 01:43:55.587928 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 28 01:43:55.587941 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 01:43:55.587952 kernel: x2apic enabled
Jan 28 01:43:55.587964 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 01:43:55.587975 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 28 01:43:55.587986 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 28 01:43:55.587997 kernel: kvm-guest: setup PV IPIs
Jan 28 01:43:55.588008 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 28 01:43:55.588019 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:43:55.588031 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Jan 28 01:43:55.588045 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 01:43:55.588056 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 28 01:43:55.588068 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 28 01:43:55.588079 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 01:43:55.588090 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 01:43:55.588101 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 01:43:55.588174 kernel: Speculative Store Bypass: Vulnerable
Jan 28 01:43:55.588186 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 28 01:43:55.588202 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 28 01:43:55.588213 kernel: active return thunk: srso_alias_return_thunk
Jan 28 01:43:55.588224 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 28 01:43:55.588235 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 28 01:43:55.588246 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 28 01:43:55.588258 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 01:43:55.588269 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 01:43:55.588280 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 01:43:55.588291 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 01:43:55.588305 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 28 01:43:55.588316 kernel: Freeing SMP alternatives memory: 32K
Jan 28 01:43:55.588327 kernel: pid_max: default: 32768 minimum: 301
Jan 28 01:43:55.588339 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 01:43:55.588350 kernel: landlock: Up and running.
Jan 28 01:43:55.588361 kernel: SELinux: Initializing.
Jan 28 01:43:55.588372 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.588383 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.588394 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 28 01:43:55.588409 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 28 01:43:55.588420 kernel: signal: max sigframe size: 1776
Jan 28 01:43:55.588431 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 01:43:55.588443 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 01:43:55.588454 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 01:43:55.588465 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 01:43:55.588476 kernel: smp: Bringing up secondary CPUs ...
Jan 28 01:43:55.588487 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 01:43:55.588498 kernel: .... node #0, CPUs: #1 #2 #3
Jan 28 01:43:55.588512 kernel: smp: Brought up 1 node, 4 CPUs
Jan 28 01:43:55.588523 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Jan 28 01:43:55.588535 kernel: Memory: 2414472K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145388K reserved, 0K cma-reserved)
Jan 28 01:43:55.588546 kernel: devtmpfs: initialized
Jan 28 01:43:55.588557 kernel: x86/mm: Memory block size: 128MB
Jan 28 01:43:55.588568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 28 01:43:55.588579 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 28 01:43:55.588591 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 28 01:43:55.588602 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 28 01:43:55.588616 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Jan 28 01:43:55.588627 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 28 01:43:55.588639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 01:43:55.588651 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.588662 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 01:43:55.588671 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 01:43:55.588681 kernel: audit: initializing netlink subsys (disabled)
Jan 28 01:43:55.588691 kernel: audit: type=2000 audit(1769564631.002:1): state=initialized audit_enabled=0 res=1
Jan 28 01:43:55.588704 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 01:43:55.588756 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 01:43:55.588768 kernel: cpuidle: using governor menu
Jan 28 01:43:55.588778 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 01:43:55.588788 kernel: dca service started, version 1.12.1
Jan 28 01:43:55.588799 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 28 01:43:55.588810 kernel: PCI: Using configuration type 1 for base access
Jan 28 01:43:55.588819 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 01:43:55.588829 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 01:43:55.588843 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 01:43:55.588853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 01:43:55.588864 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 01:43:55.588874 kernel: ACPI: Added _OSI(Module Device)
Jan 28 01:43:55.588883 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 01:43:55.588893 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 01:43:55.588903 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 01:43:55.588915 kernel: ACPI: Interpreter enabled
Jan 28 01:43:55.588925 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 28 01:43:55.588937 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 01:43:55.588947 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 01:43:55.588957 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 01:43:55.588969 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 01:43:55.588979 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 01:43:55.589362 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 01:43:55.589540 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 28 01:43:55.589704 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 28 01:43:55.589778 kernel: PCI host bridge to bus 0000:00
Jan 28 01:43:55.589946 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 01:43:55.590094 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 01:43:55.590331 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 01:43:55.590476 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 28 01:43:55.590623 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 28 01:43:55.590814 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 28 01:43:55.590968 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 01:43:55.591228 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 28 01:43:55.591408 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 28 01:43:55.591566 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 28 01:43:55.591767 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 28 01:43:55.591929 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 28 01:43:55.592264 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 01:43:55.592447 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 28 01:43:55.592605 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 28 01:43:55.592810 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 28 01:43:55.592968 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 28 01:43:55.593209 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 28 01:43:55.593372 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 28 01:43:55.593535 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 28 01:43:55.593690 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 28 01:43:55.594040 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 28 01:43:55.614556 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 28 01:43:55.614968 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 28 01:43:55.624949 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 28 01:43:55.625299 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 28 01:43:55.625498 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 28 01:43:55.625661 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 01:43:55.627987 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 28 01:43:55.628230 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 28 01:43:55.628393 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 28 01:43:55.628560 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 28 01:43:55.628805 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 28 01:43:55.628823 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 01:43:55.628837 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 01:43:55.628847 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 01:43:55.628856 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 01:43:55.628866 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 01:43:55.628875 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 01:43:55.628885 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 01:43:55.628897 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 01:43:55.628914 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 01:43:55.629026 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 01:43:55.629036 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 01:43:55.629046 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 01:43:55.629056 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 01:43:55.629065 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 01:43:55.629076 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 01:43:55.629086 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 01:43:55.629097 kernel: iommu: Default domain type: Translated
Jan 28 01:43:55.629190 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 01:43:55.629204 kernel: efivars: Registered efivars operations
Jan 28 01:43:55.629214 kernel: PCI: Using ACPI for IRQ routing
Jan 28 01:43:55.629224 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 01:43:55.629234 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 28 01:43:55.629243 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 28 01:43:55.629253 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 28 01:43:55.629265 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 28 01:43:55.629275 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 28 01:43:55.629289 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 28 01:43:55.629299 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 28 01:43:55.629309 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 28 01:43:55.629480 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 01:43:55.629643 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 01:43:55.630941 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 01:43:55.630960 kernel: vgaarb: loaded
Jan 28 01:43:55.630973 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 28 01:43:55.630990 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 28 01:43:55.631001 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 01:43:55.631012 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 01:43:55.631024 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 01:43:55.631036 kernel: pnp: PnP ACPI init
Jan 28 01:43:55.631308 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 28 01:43:55.631328 kernel: pnp: PnP ACPI: found 6 devices
Jan 28 01:43:55.631340 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 01:43:55.631456 kernel: NET: Registered PF_INET protocol family
Jan 28 01:43:55.631512 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 01:43:55.631563 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 01:43:55.631597 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 01:43:55.631609 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 01:43:55.631643 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 01:43:55.631657 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 01:43:55.631669 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.631680 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 01:43:55.631695 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 01:43:55.631707 kernel: NET: Registered PF_XDP protocol family
Jan 28 01:43:55.631931 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 28 01:43:55.632097 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 28 01:43:55.632332 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 01:43:55.633915 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 01:43:55.634082 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 01:43:55.634304 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 28 01:43:55.634452 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 28 01:43:55.634594 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 28 01:43:55.634609 kernel: PCI: CLS 0 bytes, default 64
Jan 28 01:43:55.634621 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Jan 28 01:43:55.634633 kernel: Initialise system trusted keyrings
Jan 28 01:43:55.634651 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 01:43:55.634663 kernel: Key type asymmetric registered
Jan 28 01:43:55.634675 kernel: Asymmetric key parser 'x509' registered
Jan 28 01:43:55.634691 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 28 01:43:55.634703 kernel: io scheduler mq-deadline registered
Jan 28 01:43:55.634758 kernel: io scheduler kyber registered
Jan 28 01:43:55.634770 kernel: io scheduler bfq registered
Jan 28 01:43:55.634780 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 01:43:55.634792 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 01:43:55.634804 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 01:43:55.634817 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 01:43:55.634832 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 01:43:55.634844 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 01:43:55.634856 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 01:43:55.634868 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 01:43:55.635668 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 01:43:55.635944 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 28 01:43:55.635969 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 01:43:55.636390 kernel: rtc_cmos 00:04: registered as rtc0
Jan 28 01:43:55.636547 kernel: rtc_cmos 00:04: setting system clock to 2026-01-28T01:43:54 UTC (1769564634)
Jan 28 01:43:55.636696 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 28 01:43:55.636711 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 28 01:43:55.636767 kernel: efifb: probing for efifb
Jan 28 01:43:55.636777 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 28 01:43:55.636787 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 28 01:43:55.636798 kernel: efifb: scrolling: redraw
Jan 28 01:43:55.636815 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 01:43:55.636827 kernel: Console: switching to colour frame buffer device 160x50
Jan 28 01:43:55.636839 kernel: fb0: EFI VGA frame buffer device
Jan 28 01:43:55.636853 kernel: pstore: Using crash dump compression: deflate
Jan 28 01:43:55.636955 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 28 01:43:55.636967 kernel: NET: Registered PF_INET6 protocol family
Jan 28 01:43:55.636979 kernel: Segment Routing with IPv6
Jan 28 01:43:55.636990 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 01:43:55.637002 kernel: NET: Registered PF_PACKET protocol family
Jan 28 01:43:55.637018 kernel: Key type dns_resolver registered
Jan 28 01:43:55.637030 kernel: IPI shorthand broadcast: enabled
Jan 28 01:43:55.637042 kernel: sched_clock: Marking stable (4449036245, 814154260)->(5592372167, -329181662)
Jan 28 01:43:55.637054 kernel: registered taskstats version 1
Jan 28 01:43:55.637065 kernel: Loading compiled-in X.509 certificates
Jan 28 01:43:55.637077 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 31c1e06975b690596c927b070a4cb9e218a3417b'
Jan 28 01:43:55.637088 kernel: Demotion targets for Node 0: null
Jan 28 01:43:55.637100 kernel: Key type .fscrypt registered
Jan 28 01:43:55.637164 kernel: Key type fscrypt-provisioning registered
Jan 28 01:43:55.637180 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 01:43:55.637192 kernel: ima: Allocated hash algorithm: sha1
Jan 28 01:43:55.637204 kernel: ima: No architecture policies found
Jan 28 01:43:55.637215 kernel: clk: Disabling unused clocks
Jan 28 01:43:55.637227 kernel: Warning: unable to open an initial console.
Jan 28 01:43:55.637239 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 28 01:43:55.637251 kernel: Write protecting the kernel read-only data: 40960k
Jan 28 01:43:55.637262 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 28 01:43:55.637277 kernel: Run /init as init process
Jan 28 01:43:55.637288 kernel: with arguments:
Jan 28 01:43:55.637300 kernel: /init
Jan 28 01:43:55.637311 kernel: with environment:
Jan 28 01:43:55.637323 kernel: HOME=/
Jan 28 01:43:55.637334 kernel: TERM=linux
Jan 28 01:43:55.637347 systemd[1]: Successfully made /usr/ read-only.
Jan 28 01:43:55.637362 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 01:43:55.637378 systemd[1]: Detected virtualization kvm.
Jan 28 01:43:55.637390 systemd[1]: Detected architecture x86-64.
Jan 28 01:43:55.637401 systemd[1]: Running in initrd.
Jan 28 01:43:55.637413 systemd[1]: No hostname configured, using default hostname.
Jan 28 01:43:55.637425 systemd[1]: Hostname set to .
Jan 28 01:43:55.637437 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 01:43:55.637449 systemd[1]: Queued start job for default target initrd.target.
Jan 28 01:43:55.637461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 01:43:55.637477 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 01:43:55.637490 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 01:43:55.637503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 01:43:55.637515 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 01:43:55.637529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 01:43:55.637543 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 01:43:55.637555 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 01:43:55.637570 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 01:43:55.637582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 01:43:55.637595 systemd[1]: Reached target paths.target - Path Units.
Jan 28 01:43:55.637607 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 01:43:55.637619 systemd[1]: Reached target swap.target - Swaps.
Jan 28 01:43:55.637631 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 01:43:55.637643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 01:43:55.637655 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 01:43:55.637668 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 01:43:55.637683 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 28 01:43:55.637695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 01:43:55.637708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 01:43:55.637760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 01:43:55.637772 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 01:43:55.637782 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 01:43:55.637793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 01:43:55.637805 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 01:43:55.637821 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 28 01:43:55.637834 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 01:43:55.637846 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 01:43:55.637941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 01:43:55.637956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:43:55.637968 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 01:43:55.638020 systemd-journald[203]: Collecting audit messages is disabled.
Jan 28 01:43:55.638050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 01:43:55.638066 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 01:43:55.638079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 01:43:55.638091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:43:55.638104 systemd-journald[203]: Journal started
Jan 28 01:43:55.638190 systemd-journald[203]: Runtime Journal (/run/log/journal/e7ceb61748e541ef9aa0a7438bb231d1) is 6M, max 48.1M, 42.1M free.
Jan 28 01:43:55.594228 systemd-modules-load[204]: Inserted module 'overlay'
Jan 28 01:43:55.661518 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 01:43:55.662246 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 01:43:55.685037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 01:43:55.738363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 01:43:55.742760 kernel: Bridge firewalling registered
Jan 28 01:43:55.743008 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 01:43:55.743846 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 28 01:43:55.744346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 01:43:55.747574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 01:43:55.753951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 01:43:55.772475 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 01:43:55.795192 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 01:43:55.811357 systemd-tmpfiles[224]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 28 01:43:55.831566 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 01:43:55.863817 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 01:43:55.871250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 01:43:55.901696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 01:43:55.910945 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8355b3b0767fa229204d694342ded3950b38c60ee1a03409aead6472a8d5e262
Jan 28 01:43:56.010955 systemd-resolved[243]: Positive Trust Anchors:
Jan 28 01:43:56.010994 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 01:43:56.011019 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 01:43:56.014537 systemd-resolved[243]: Defaulting to hostname 'linux'.
Jan 28 01:43:56.018536 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 01:43:56.028010 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 01:43:56.355939 kernel: SCSI subsystem initialized
Jan 28 01:43:56.384267 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 01:43:56.429777 kernel: iscsi: registered transport (tcp)
Jan 28 01:43:56.478598 kernel: iscsi: registered transport (qla4xxx)
Jan 28 01:43:56.478680 kernel: QLogic iSCSI HBA Driver
Jan 28 01:43:56.548387 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 01:43:56.616789 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 01:43:56.638370 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 01:43:56.787620 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 01:43:56.801833 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 01:43:56.924308 kernel: raid6: avx2x4 gen() 19659 MB/s
Jan 28 01:43:56.944379 kernel: raid6: avx2x2 gen() 20251 MB/s
Jan 28 01:43:56.966055 kernel: raid6: avx2x1 gen() 13959 MB/s
Jan 28 01:43:56.966211 kernel: raid6: using algorithm avx2x2 gen() 20251 MB/s
Jan 28 01:43:56.988858 kernel: raid6: .... xor() 17452 MB/s, rmw enabled
Jan 28 01:43:56.989317 kernel: raid6: using avx2x2 recovery algorithm
Jan 28 01:43:57.027973 kernel: xor: automatically using best checksumming function avx
Jan 28 01:43:57.412833 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 01:43:57.434902 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 01:43:57.455475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 01:43:57.532300 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Jan 28 01:43:57.545875 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 01:43:57.573847 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 01:43:57.635470 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 28 01:43:57.710632 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 01:43:57.728023 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 01:43:57.926456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 01:43:57.947898 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 01:43:58.098371 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 28 01:43:58.121184 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 28 01:43:58.140266 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 01:43:58.123955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:43:58.163279 kernel: GPT:9289727 != 19775487
Jan 28 01:43:58.163349 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 01:43:58.163369 kernel: GPT:9289727 != 19775487
Jan 28 01:43:58.163385 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 01:43:58.163400 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:43:58.124042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:43:58.163822 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:43:58.177804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:43:58.196202 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 28 01:43:58.203265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 01:43:58.203412 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:43:58.213703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 01:43:58.246260 kernel: libata version 3.00 loaded.
Jan 28 01:43:58.246301 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 01:43:58.303221 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 28 01:43:58.319936 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 01:43:58.320352 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 01:43:58.361085 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 28 01:43:58.361621 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 28 01:43:58.362050 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 01:43:58.374899 kernel: AES CTR mode by8 optimization enabled
Jan 28 01:43:58.374960 kernel: scsi host0: ahci
Jan 28 01:43:58.378858 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 01:43:58.436454 kernel: scsi host1: ahci
Jan 28 01:43:58.436727 kernel: scsi host2: ahci
Jan 28 01:43:58.436974 kernel: scsi host3: ahci
Jan 28 01:43:58.437233 kernel: scsi host4: ahci
Jan 28 01:43:58.437429 kernel: scsi host5: ahci
Jan 28 01:43:58.437613 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1
Jan 28 01:43:58.397627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 01:43:58.480060 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1
Jan 28 01:43:58.480098 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1
Jan 28 01:43:58.480224 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1
Jan 28 01:43:58.480245 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1
Jan 28 01:43:58.480263 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1
Jan 28 01:43:58.478894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 01:43:58.503514 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 01:43:58.525910 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 01:43:58.526185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 01:43:58.578106 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 01:43:58.621888 disk-uuid[616]: Primary Header is updated.
Jan 28 01:43:58.621888 disk-uuid[616]: Secondary Entries is updated.
Jan 28 01:43:58.621888 disk-uuid[616]: Secondary Header is updated.
Jan 28 01:43:58.651544 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:43:58.672598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:43:58.762322 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:58.775970 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 28 01:43:58.776042 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:58.799086 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:58.799208 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:58.805095 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 28 01:43:58.805206 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:43:58.828641 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 28 01:43:58.828699 kernel: ata3.00: applying bridge limits
Jan 28 01:43:58.842649 kernel: ata3.00: LPM support broken, forcing max_power
Jan 28 01:43:58.842706 kernel: ata3.00: configured for UDMA/100
Jan 28 01:43:58.853702 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 28 01:43:58.980862 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 28 01:43:58.981348 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 01:43:59.000531 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 28 01:43:59.513742 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 01:43:59.529245 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 01:43:59.566606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 01:43:59.588245 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 01:43:59.611740 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 01:43:59.673275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 01:43:59.675548 disk-uuid[617]: The operation has completed successfully.
Jan 28 01:43:59.683839 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 01:43:59.755589 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 01:43:59.755894 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 01:43:59.827271 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 01:43:59.868658 sh[654]: Success
Jan 28 01:43:59.931378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 01:43:59.931461 kernel: device-mapper: uevent: version 1.0.3
Jan 28 01:43:59.934448 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 28 01:43:59.996231 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 28 01:44:00.075951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 01:44:00.096454 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 01:44:00.138454 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 01:44:00.174873 kernel: BTRFS: device fsid 4389fb68-1fd1-4240-9a3a-21ed56363b72 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (666)
Jan 28 01:44:00.174897 kernel: BTRFS info (device dm-0): first mount of filesystem 4389fb68-1fd1-4240-9a3a-21ed56363b72
Jan 28 01:44:00.174908 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:44:00.204651 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 01:44:00.204729 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 28 01:44:00.209831 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 01:44:00.215508 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 01:44:00.242876 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 01:44:00.254626 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 01:44:00.271516 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 01:44:00.332633 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (699)
Jan 28 01:44:00.340388 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af
Jan 28 01:44:00.340456 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:44:00.366423 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:44:00.366516 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:44:00.385405 kernel: BTRFS info (device vda6): last unmount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af
Jan 28 01:44:00.409041 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 01:44:00.432256 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 01:44:00.688520 ignition[758]: Ignition 2.22.0
Jan 28 01:44:00.688568 ignition[758]: Stage: fetch-offline
Jan 28 01:44:00.688619 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:00.688634 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:00.688729 ignition[758]: parsed url from cmdline: ""
Jan 28 01:44:00.717516 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 01:44:00.688735 ignition[758]: no config URL provided
Jan 28 01:44:00.739033 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 01:44:00.688743 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 01:44:00.688807 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 28 01:44:00.688837 ignition[758]: op(1): [started] loading QEMU firmware config module
Jan 28 01:44:00.688844 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 28 01:44:00.724837 ignition[758]: op(1): [finished] loading QEMU firmware config module
Jan 28 01:44:00.880744 systemd-networkd[846]: lo: Link UP
Jan 28 01:44:00.887049 systemd-networkd[846]: lo: Gained carrier
Jan 28 01:44:00.889622 systemd-networkd[846]: Enumeration completed
Jan 28 01:44:00.891402 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 01:44:00.897961 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:44:00.897968 systemd-networkd[846]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 01:44:00.901724 systemd-networkd[846]: eth0: Link UP
Jan 28 01:44:00.930361 systemd[1]: Reached target network.target - Network.
Jan 28 01:44:00.933004 systemd-networkd[846]: eth0: Gained carrier
Jan 28 01:44:00.933031 systemd-networkd[846]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 01:44:01.009273 systemd-networkd[846]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 28 01:44:01.245280 ignition[758]: parsing config with SHA512: 0c1192c6192f33f0cdc43ebcc097de2dbfdfc4c3ac0fa6532cc5818e86b258803e1d883b344aea6b9b80db54aed13672f83428263b922597119376d523da3af6
Jan 28 01:44:01.258444 unknown[758]: fetched base config from "system"
Jan 28 01:44:01.258460 unknown[758]: fetched user config from "qemu"
Jan 28 01:44:01.272697 ignition[758]: fetch-offline: fetch-offline passed
Jan 28 01:44:01.273305 ignition[758]: Ignition finished successfully
Jan 28 01:44:01.288565 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 01:44:01.304019 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 28 01:44:01.316970 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 01:44:01.385372 ignition[851]: Ignition 2.22.0
Jan 28 01:44:01.385393 ignition[851]: Stage: kargs
Jan 28 01:44:01.385575 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:01.385591 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:01.387288 ignition[851]: kargs: kargs passed
Jan 28 01:44:01.387350 ignition[851]: Ignition finished successfully
Jan 28 01:44:01.433574 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 01:44:01.449235 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 01:44:01.511392 ignition[859]: Ignition 2.22.0
Jan 28 01:44:01.511452 ignition[859]: Stage: disks
Jan 28 01:44:01.511650 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jan 28 01:44:01.511664 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 28 01:44:01.523383 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 01:44:01.512555 ignition[859]: disks: disks passed
Jan 28 01:44:01.527278 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 01:44:01.512604 ignition[859]: Ignition finished successfully
Jan 28 01:44:01.561532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 01:44:01.573732 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 01:44:01.577544 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 01:44:01.598349 systemd[1]: Reached target basic.target - Basic System.
Jan 28 01:44:01.621201 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 01:44:01.676592 systemd-fsck[869]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 28 01:44:01.688930 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 01:44:01.699631 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 01:44:02.140443 kernel: EXT4-fs (vda9): mounted filesystem 0c920277-6cf2-4276-8e4c-1a9645be49e7 r/w with ordered data mode. Quota mode: none.
Jan 28 01:44:02.143464 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 01:44:02.148420 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 01:44:02.163649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 01:44:02.172256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 01:44:02.183420 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 28 01:44:02.183517 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 01:44:02.183558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 01:44:02.253301 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 01:44:02.290878 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (877)
Jan 28 01:44:02.290916 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af
Jan 28 01:44:02.290932 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 28 01:44:02.267413 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 01:44:02.317670 kernel: BTRFS info (device vda6): turning on async discard
Jan 28 01:44:02.317756 kernel: BTRFS info (device vda6): enabling free space tree
Jan 28 01:44:02.321663 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 01:44:02.416226 initrd-setup-root[901]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 01:44:02.437979 initrd-setup-root[908]: cut: /sysroot/etc/group: No such file or directory
Jan 28 01:44:02.456456 initrd-setup-root[915]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 01:44:02.467392 initrd-setup-root[922]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 01:44:02.570328 systemd-networkd[846]: eth0: Gained IPv6LL
Jan 28 01:44:02.714950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 01:44:02.719242 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 01:44:02.765847 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 01:44:02.809181 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 01:44:02.845953 kernel: BTRFS info (device vda6): last unmount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 01:44:02.940163 ignition[989]: INFO : Ignition 2.22.0 Jan 28 01:44:02.940163 ignition[989]: INFO : Stage: mount Jan 28 01:44:02.937905 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 01:44:02.958095 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:44:02.958095 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:44:02.958095 ignition[989]: INFO : mount: mount passed Jan 28 01:44:02.958095 ignition[989]: INFO : Ignition finished successfully Jan 28 01:44:02.953220 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 01:44:02.971540 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 01:44:03.153215 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 01:44:03.222617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1002) Jan 28 01:44:03.234594 kernel: BTRFS info (device vda6): first mount of filesystem 9af5053b-db68-4bfb-9f6a-fea1b6dc27af Jan 28 01:44:03.234670 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 01:44:03.264984 kernel: BTRFS info (device vda6): turning on async discard Jan 28 01:44:03.265074 kernel: BTRFS info (device vda6): enabling free space tree Jan 28 01:44:03.274968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 01:44:03.376106 ignition[1019]: INFO : Ignition 2.22.0 Jan 28 01:44:03.376106 ignition[1019]: INFO : Stage: files Jan 28 01:44:03.384436 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:44:03.384436 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:44:03.384436 ignition[1019]: DEBUG : files: compiled without relabeling support, skipping Jan 28 01:44:03.384436 ignition[1019]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 01:44:03.384436 ignition[1019]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 01:44:03.450978 ignition[1019]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 01:44:03.450978 ignition[1019]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 01:44:03.450978 ignition[1019]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 01:44:03.450978 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:44:03.450978 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 01:44:03.424857 unknown[1019]: wrote ssh authorized keys file for user: core Jan 28 01:44:03.540037 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 01:44:03.672366 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:44:03.693850 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 01:44:04.009526 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 01:44:04.854014 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 01:44:04.854014 ignition[1019]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 01:44:04.874460 ignition[1019]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 01:44:04.889506 ignition[1019]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" 
Jan 28 01:44:04.968537 ignition[1019]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 01:44:04.984748 ignition[1019]: INFO : files: files passed Jan 28 01:44:04.984748 ignition[1019]: INFO : Ignition finished successfully Jan 28 01:44:04.983596 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 01:44:04.996433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 01:44:05.024449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 01:44:05.095700 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 01:44:05.096006 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 01:44:05.129599 initrd-setup-root-after-ignition[1047]: grep: /sysroot/oem/oem-release: No such file or directory Jan 28 01:44:05.149682 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:44:05.149682 initrd-setup-root-after-ignition[1050]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:44:05.142269 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:44:05.179192 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 01:44:05.155988 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 01:44:05.185636 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 01:44:05.280607 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 01:44:05.281518 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 01:44:05.300461 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 01:44:05.300644 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 01:44:05.314867 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 01:44:05.316296 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 01:44:05.391843 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:44:05.394318 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 01:44:05.431593 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:44:05.432002 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 28 01:44:05.444288 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 01:44:05.455189 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 01:44:05.455445 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 01:44:05.467403 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 01:44:05.476367 systemd[1]: Stopped target basic.target - Basic System. Jan 28 01:44:05.480706 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 01:44:05.491863 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 01:44:05.495879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 01:44:05.509733 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 01:44:05.515709 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 01:44:05.522494 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 01:44:05.541742 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 01:44:05.546752 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 01:44:05.555618 systemd[1]: Stopped target swap.target - Swaps. Jan 28 01:44:05.571203 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 01:44:05.571412 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 01:44:05.583711 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:44:05.596618 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:44:05.602061 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 01:44:05.616470 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:44:05.616784 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 01:44:05.617030 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 01:44:05.637282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 01:44:05.637535 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 01:44:05.642495 systemd[1]: Stopped target paths.target - Path Units. Jan 28 01:44:05.647548 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 01:44:05.798010 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:44:05.941991 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 01:44:06.000468 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 01:44:06.024643 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 01:44:06.024779 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 01:44:06.045489 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 01:44:06.045669 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 01:44:06.050671 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 01:44:06.050941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 01:44:06.055429 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 01:44:06.056493 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 01:44:06.068850 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 28 01:44:06.072659 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 01:44:06.072885 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:44:06.083325 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 01:44:06.089832 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 01:44:06.090074 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:44:06.095855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 01:44:06.096043 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 01:44:06.121281 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 01:44:06.149550 ignition[1074]: INFO : Ignition 2.22.0 Jan 28 01:44:06.149550 ignition[1074]: INFO : Stage: umount Jan 28 01:44:06.149550 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 01:44:06.149550 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 28 01:44:06.149550 ignition[1074]: INFO : umount: umount passed Jan 28 01:44:06.149550 ignition[1074]: INFO : Ignition finished successfully Jan 28 01:44:06.121415 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 01:44:06.178465 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 01:44:06.179590 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 01:44:06.179764 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 01:44:06.182629 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 01:44:06.182846 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 01:44:06.203366 systemd[1]: Stopped target network.target - Network. Jan 28 01:44:06.208368 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 01:44:06.208468 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 01:44:06.216274 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 01:44:06.216378 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 01:44:06.227501 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 01:44:06.227617 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 01:44:06.231629 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 01:44:06.231719 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 01:44:06.236086 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 01:44:06.236246 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 01:44:06.249024 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 01:44:06.253057 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 01:44:06.268453 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 01:44:06.268590 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 01:44:06.285276 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 28 01:44:06.285621 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 01:44:06.285772 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 01:44:06.299407 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 28 01:44:06.301366 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 28 01:44:06.323601 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 01:44:06.323960 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:44:06.345588 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 01:44:06.360059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 01:44:06.360219 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 01:44:06.379922 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 01:44:06.380418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:44:06.397333 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 01:44:06.398578 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 01:44:06.415904 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 01:44:06.416011 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:44:06.428644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:44:06.444403 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 01:44:06.444495 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 28 01:44:06.479925 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 01:44:06.480658 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:44:06.496619 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 01:44:06.498614 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 01:44:06.509702 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 01:44:06.509887 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 01:44:06.517608 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 01:44:06.517666 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:44:06.525605 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 01:44:06.525714 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 01:44:06.537744 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 01:44:06.538695 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 01:44:06.552445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 01:44:06.552577 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 01:44:06.588557 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 01:44:06.588839 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 01:44:06.588962 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:44:06.615599 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 01:44:06.615698 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:44:06.628656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 01:44:06.628748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 28 01:44:06.658281 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 28 01:44:06.658375 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 28 01:44:06.658449 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 01:44:06.671398 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 01:44:06.671562 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 01:44:06.680557 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 01:44:06.702329 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 01:44:06.760046 systemd[1]: Switching root. Jan 28 01:44:06.836497 systemd-journald[203]: Journal stopped Jan 28 01:44:09.263442 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 28 01:44:09.263550 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 01:44:09.263571 kernel: SELinux: policy capability open_perms=1 Jan 28 01:44:09.263586 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 01:44:09.263611 kernel: SELinux: policy capability always_check_network=0 Jan 28 01:44:09.263626 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 01:44:09.263657 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 01:44:09.263674 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 01:44:09.263690 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 01:44:09.263705 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 01:44:09.263719 kernel: audit: type=1403 audit(1769564647.309:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 01:44:09.263736 systemd[1]: Successfully loaded SELinux policy in 131.734ms. Jan 28 01:44:09.263768 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.212ms. Jan 28 01:44:09.263789 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 01:44:09.263808 systemd[1]: Detected virtualization kvm. Jan 28 01:44:09.263882 systemd[1]: Detected architecture x86-64. Jan 28 01:44:09.263896 systemd[1]: Detected first boot. Jan 28 01:44:09.263907 systemd[1]: Initializing machine ID from VM UUID. Jan 28 01:44:09.263919 zram_generator::config[1120]: No configuration found. Jan 28 01:44:09.263938 kernel: Guest personality initialized and is inactive Jan 28 01:44:09.263956 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 28 01:44:09.263983 kernel: Initialized host personality Jan 28 01:44:09.263999 kernel: NET: Registered PF_VSOCK protocol family Jan 28 01:44:09.264017 systemd[1]: Populated /etc with preset unit settings. Jan 28 01:44:09.264041 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 28 01:44:09.264061 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 01:44:09.264078 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 01:44:09.264097 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 28 01:44:09.264184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 01:44:09.264206 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 01:44:09.264226 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 01:44:09.264244 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 01:44:09.264269 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 01:44:09.264291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 01:44:09.264307 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 01:44:09.264323 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 01:44:09.264340 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 01:44:09.264357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 01:44:09.264376 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 01:44:09.264393 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 01:44:09.264412 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 01:44:09.264435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 01:44:09.264452 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 01:44:09.264468 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 01:44:09.264484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 01:44:09.264500 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 01:44:09.264516 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 01:44:09.264532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 01:44:09.264548 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 01:44:09.264567 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 01:44:09.264586 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 01:44:09.264596 systemd[1]: Reached target slices.target - Slice Units. Jan 28 01:44:09.264607 systemd[1]: Reached target swap.target - Swaps. Jan 28 01:44:09.264618 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 01:44:09.264630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 01:44:09.264640 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 01:44:09.264651 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 01:44:09.264662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 01:44:09.264675 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 01:44:09.264686 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 01:44:09.264696 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 01:44:09.264708 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 28 01:44:09.264719 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 01:44:09.264729 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:44:09.264740 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 01:44:09.264750 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 01:44:09.264761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 01:44:09.264774 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 01:44:09.264784 systemd[1]: Reached target machines.target - Containers. Jan 28 01:44:09.264795 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 01:44:09.264806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:44:09.264816 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 01:44:09.264873 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 01:44:09.264885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:44:09.264895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:44:09.264910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:44:09.264921 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 01:44:09.264931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:44:09.264943 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 01:44:09.264954 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 01:44:09.264965 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 01:44:09.264976 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 01:44:09.264986 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 01:44:09.265000 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:44:09.265011 kernel: fuse: init (API version 7.41) Jan 28 01:44:09.265021 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 01:44:09.265032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 01:44:09.265042 kernel: ACPI: bus type drm_connector registered Jan 28 01:44:09.265053 kernel: loop: module loaded Jan 28 01:44:09.265063 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 01:44:09.265074 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 01:44:09.265085 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 01:44:09.265098 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 01:44:09.265155 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 01:44:09.265169 systemd[1]: Stopped verity-setup.service. 
Jan 28 01:44:09.265181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:44:09.265226 systemd-journald[1205]: Collecting audit messages is disabled. Jan 28 01:44:09.265248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 01:44:09.265260 systemd-journald[1205]: Journal started Jan 28 01:44:09.265280 systemd-journald[1205]: Runtime Journal (/run/log/journal/e7ceb61748e541ef9aa0a7438bb231d1) is 6M, max 48.1M, 42.1M free. Jan 28 01:44:08.306353 systemd[1]: Queued start job for default target multi-user.target. Jan 28 01:44:08.330364 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 01:44:08.331413 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 01:44:08.332028 systemd[1]: systemd-journald.service: Consumed 1.417s CPU time. Jan 28 01:44:09.274192 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 01:44:09.282363 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 01:44:09.291302 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 01:44:09.297222 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 01:44:09.304506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 01:44:09.314244 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 01:44:09.320003 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 01:44:09.327278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 01:44:09.334910 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 01:44:09.335413 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 01:44:09.341414 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:44:09.341809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:44:09.349179 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:44:09.349554 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:44:09.359764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:44:09.360231 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:44:09.370239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 01:44:09.370610 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 01:44:09.378871 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:44:09.379278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:44:09.387878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 01:44:09.393634 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 01:44:09.402908 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 01:44:09.411260 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 01:44:09.436882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 01:44:09.448915 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 01:44:09.457939 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 28 01:44:09.468421 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 01:44:09.479015 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 01:44:09.479187 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 01:44:09.483024 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 01:44:09.495939 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 01:44:09.501646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:44:09.505241 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 01:44:09.515648 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 01:44:09.522520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:44:09.533957 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 01:44:09.542593 systemd-journald[1205]: Time spent on flushing to /var/log/journal/e7ceb61748e541ef9aa0a7438bb231d1 is 35.809ms for 1060 entries. Jan 28 01:44:09.542593 systemd-journald[1205]: System Journal (/var/log/journal/e7ceb61748e541ef9aa0a7438bb231d1) is 8M, max 195.6M, 187.6M free. Jan 28 01:44:09.595379 systemd-journald[1205]: Received client request to flush runtime journal. Jan 28 01:44:09.543074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:44:09.550384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 01:44:09.566329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 01:44:09.583025 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 01:44:09.594414 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 01:44:09.601598 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 01:44:09.608893 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 01:44:09.623181 kernel: loop0: detected capacity change from 0 to 110984 Jan 28 01:44:09.620634 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 01:44:09.631959 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 01:44:09.643593 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 01:44:09.650789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 01:44:09.688209 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 01:44:09.696786 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 01:44:09.704515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 01:44:09.715031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 01:44:09.719254 kernel: loop1: detected capacity change from 0 to 128560 Jan 28 01:44:09.720804 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 01:44:09.768919 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. 
Jan 28 01:44:09.768973 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 28 01:44:09.776043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 01:44:09.800521 kernel: loop2: detected capacity change from 0 to 224512 Jan 28 01:44:09.846262 kernel: loop3: detected capacity change from 0 to 110984 Jan 28 01:44:09.896188 kernel: loop4: detected capacity change from 0 to 128560 Jan 28 01:44:09.934178 kernel: loop5: detected capacity change from 0 to 224512 Jan 28 01:44:09.965784 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 28 01:44:09.966726 (sd-merge)[1264]: Merged extensions into '/usr'. Jan 28 01:44:09.975884 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 01:44:09.976034 systemd[1]: Reloading... Jan 28 01:44:10.049241 zram_generator::config[1287]: No configuration found. Jan 28 01:44:10.264353 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 01:44:10.338672 systemd[1]: Reloading finished in 361 ms. Jan 28 01:44:10.378932 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 01:44:10.384329 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 01:44:10.392789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 01:44:10.433434 systemd[1]: Starting ensure-sysext.service... Jan 28 01:44:10.440934 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 01:44:10.449975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 01:44:10.494536 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)... Jan 28 01:44:10.494580 systemd[1]: Reloading... Jan 28 01:44:10.501102 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 01:44:10.502010 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 01:44:10.502557 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 01:44:10.503026 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 01:44:10.504496 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 01:44:10.505013 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jan 28 01:44:10.505234 systemd-tmpfiles[1329]: ACLs are not supported, ignoring. Jan 28 01:44:10.511711 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:44:10.511820 systemd-tmpfiles[1329]: Skipping /boot Jan 28 01:44:10.529181 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 01:44:10.529362 systemd-tmpfiles[1329]: Skipping /boot Jan 28 01:44:10.534272 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 28 01:44:10.599700 zram_generator::config[1357]: No configuration found. 
Jan 28 01:44:10.894212 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 28 01:44:10.894309 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 01:44:10.936413 kernel: ACPI: button: Power Button [PWRF] Jan 28 01:44:10.989696 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 28 01:44:10.992794 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 01:44:10.993238 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 01:44:11.084478 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 01:44:11.085803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 01:44:11.092785 systemd[1]: Reloading finished in 597 ms. Jan 28 01:44:11.114359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 01:44:11.211310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 01:44:11.296204 systemd[1]: Finished ensure-sysext.service. Jan 28 01:44:11.332102 kernel: kvm_amd: TSC scaling supported Jan 28 01:44:11.332267 kernel: kvm_amd: Nested Virtualization enabled Jan 28 01:44:11.332288 kernel: kvm_amd: Nested Paging enabled Jan 28 01:44:11.337501 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 28 01:44:11.337598 kernel: kvm_amd: PMU virtualization is disabled Jan 28 01:44:11.346941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:44:11.353238 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:44:11.499371 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 01:44:11.511185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 01:44:11.515257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 01:44:11.530024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 01:44:11.536419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 01:44:11.610320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 01:44:11.611646 kernel: EDAC MC: Ver: 3.0.0 Jan 28 01:44:11.622283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 01:44:11.632277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 01:44:11.639710 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 01:44:11.652736 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 01:44:11.667334 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 01:44:11.672077 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 01:44:11.681206 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 01:44:11.698004 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 01:44:11.714027 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 28 01:44:11.720987 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 01:44:11.723203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 01:44:11.723593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 01:44:11.730408 augenrules[1483]: No rules Jan 28 01:44:11.735303 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:44:11.735793 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:44:11.738500 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 01:44:11.741432 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 01:44:11.751572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 01:44:11.757393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 01:44:11.766323 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 01:44:11.768474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 01:44:11.775398 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 01:44:11.786964 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 01:44:11.813534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 01:44:11.813818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 01:44:11.819752 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 01:44:11.824428 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 01:44:11.835430 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 01:44:11.866470 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 01:44:11.888767 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 01:44:11.889273 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 01:44:11.943049 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 01:44:11.993829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 01:44:12.113417 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 01:44:12.118789 systemd-networkd[1475]: lo: Link UP Jan 28 01:44:12.118801 systemd-networkd[1475]: lo: Gained carrier Jan 28 01:44:12.119340 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 01:44:12.122401 systemd-networkd[1475]: Enumeration completed Jan 28 01:44:12.123636 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:44:12.123711 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 01:44:12.123926 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 28 01:44:12.125322 systemd-networkd[1475]: eth0: Link UP Jan 28 01:44:12.126730 systemd-networkd[1475]: eth0: Gained carrier Jan 28 01:44:12.126821 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 01:44:12.134498 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 01:44:12.144656 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 01:44:12.158417 systemd-resolved[1476]: Positive Trust Anchors: Jan 28 01:44:12.158933 systemd-resolved[1476]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 01:44:12.158978 systemd-resolved[1476]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 01:44:12.165211 systemd-networkd[1475]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 28 01:44:12.165260 systemd-resolved[1476]: Defaulting to hostname 'linux'. Jan 28 01:44:12.168545 systemd-timesyncd[1478]: Network configuration changed, trying to establish connection. Jan 28 01:44:12.168682 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 01:44:13.896853 systemd-timesyncd[1478]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 28 01:44:13.897200 systemd-timesyncd[1478]: Initial clock synchronization to Wed 2026-01-28 01:44:13.896707 UTC. Jan 28 01:44:13.897410 systemd[1]: Reached target network.target - Network. Jan 28 01:44:13.900178 systemd-resolved[1476]: Clock change detected. Flushing caches. Jan 28 01:44:13.903153 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 01:44:13.909485 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 01:44:13.914441 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 01:44:13.921468 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 01:44:13.929291 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 28 01:44:13.937417 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 01:44:13.943182 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 01:44:13.949810 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 01:44:13.955602 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 01:44:13.955728 systemd[1]: Reached target paths.target - Path Units. Jan 28 01:44:13.960538 systemd[1]: Reached target timers.target - Timer Units. Jan 28 01:44:13.969380 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 01:44:13.981088 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 28 01:44:13.988858 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 01:44:13.996471 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 01:44:14.003763 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 01:44:14.015286 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 01:44:14.020265 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 01:44:14.028372 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 01:44:14.034336 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 01:44:14.044772 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 01:44:14.051302 systemd[1]: Reached target basic.target - Basic System. Jan 28 01:44:14.055040 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:44:14.055119 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 01:44:14.058295 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 01:44:14.064369 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 01:44:14.069732 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 01:44:14.087962 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 01:44:14.101268 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 01:44:14.105258 jq[1522]: false Jan 28 01:44:14.106586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 01:44:14.108509 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 28 01:44:14.118773 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 01:44:14.134276 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 01:44:14.143442 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing passwd entry cache Jan 28 01:44:14.140774 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 01:44:14.140034 oslogin_cache_refresh[1524]: Refreshing passwd entry cache Jan 28 01:44:14.158226 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 01:44:14.166492 extend-filesystems[1523]: Found /dev/vda6 Jan 28 01:44:14.177545 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting users, quitting Jan 28 01:44:14.177545 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 28 01:44:14.177545 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Refreshing group entry cache Jan 28 01:44:14.173523 oslogin_cache_refresh[1524]: Failure getting users, quitting Jan 28 01:44:14.173551 oslogin_cache_refresh[1524]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 28 01:44:14.173682 oslogin_cache_refresh[1524]: Refreshing group entry cache Jan 28 01:44:14.182559 extend-filesystems[1523]: Found /dev/vda9 Jan 28 01:44:14.190591 extend-filesystems[1523]: Checking size of /dev/vda9 Jan 28 01:44:14.201038 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Failure getting groups, quitting Jan 28 01:44:14.201038 google_oslogin_nss_cache[1524]: oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:44:14.195866 oslogin_cache_refresh[1524]: Failure getting groups, quitting Jan 28 01:44:14.196168 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 01:44:14.196199 oslogin_cache_refresh[1524]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 28 01:44:14.207846 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 01:44:14.210620 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 01:44:14.213244 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 01:44:14.225150 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 01:44:14.227288 extend-filesystems[1523]: Resized partition /dev/vda9 Jan 28 01:44:14.241030 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 01:44:14.250188 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 01:44:14.250565 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 01:44:14.251194 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 28 01:44:14.251538 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 28 01:44:14.261215 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 01:44:14.262334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 01:44:14.263080 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 01:44:14.300171 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 28 01:44:14.300210 update_engine[1545]: I20260128 01:44:14.295006 1545 main.cc:92] Flatcar Update Engine starting Jan 28 01:44:14.287087 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 01:44:14.289028 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 01:44:14.328959 jq[1548]: true Jan 28 01:44:14.350436 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 01:44:14.362958 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 28 01:44:14.363021 tar[1552]: linux-amd64/LICENSE Jan 28 01:44:14.379869 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 01:44:14.397978 jq[1557]: true Jan 28 01:44:14.412052 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 01:44:14.412052 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 28 01:44:14.412052 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jan 28 01:44:14.446592 extend-filesystems[1523]: Resized filesystem in /dev/vda9 Jan 28 01:44:14.440979 dbus-daemon[1520]: [system] SELinux support is enabled Jan 28 01:44:14.416420 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 01:44:14.447358 tar[1552]: linux-amd64/helm Jan 28 01:44:14.416869 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 01:44:14.450091 update_engine[1545]: I20260128 01:44:14.448395 1545 update_check_scheduler.cc:74] Next update check in 5m13s Jan 28 01:44:14.442746 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 01:44:14.463540 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 01:44:14.463598 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 01:44:14.472191 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 01:44:14.472240 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 01:44:14.480278 systemd[1]: Started update-engine.service - Update Engine. Jan 28 01:44:14.490560 systemd-logind[1540]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 01:44:14.493341 systemd-logind[1540]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 01:44:14.493822 systemd-logind[1540]: New seat seat0. Jan 28 01:44:14.494229 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 01:44:14.499352 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 01:44:14.597748 bash[1585]: Updated "/home/core/.ssh/authorized_keys" Jan 28 01:44:14.602227 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 01:44:14.610112 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 28 01:44:14.654600 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 01:44:14.765459 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 01:44:14.794969 containerd[1553]: time="2026-01-28T01:44:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 01:44:14.795762 containerd[1553]: time="2026-01-28T01:44:14.795688685Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.812695778Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.963µs" Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813265622Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813310726Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813562867Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813587383Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813628680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813775864Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.813797615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.814231194Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.814255039Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.814268254Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 01:44:14.815770 containerd[1553]: time="2026-01-28T01:44:14.814278122Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.814395741Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.815235940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.815279751Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.815299027Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.815378546Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.815700447Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 01:44:14.816310 containerd[1553]: time="2026-01-28T01:44:14.815785986Z" level=info msg="metadata content store policy set" policy=shared Jan 28 01:44:14.829677 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 01:44:14.853263 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 01:44:14.862781 containerd[1553]: time="2026-01-28T01:44:14.860091865Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 01:44:14.864135 containerd[1553]: time="2026-01-28T01:44:14.863995150Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 01:44:14.864135 containerd[1553]: time="2026-01-28T01:44:14.864100757Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 01:44:14.864135 containerd[1553]: time="2026-01-28T01:44:14.864124572Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 01:44:14.864273 containerd[1553]: time="2026-01-28T01:44:14.864149107Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 01:44:14.864273 containerd[1553]: time="2026-01-28T01:44:14.864170197Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 01:44:14.864273 containerd[1553]: time="2026-01-28T01:44:14.864191456Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 28 01:44:14.864273 containerd[1553]: time="2026-01-28T01:44:14.864215171Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 01:44:14.864273 containerd[1553]: time="2026-01-28T01:44:14.864235199Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 01:44:14.864404 containerd[1553]: time="2026-01-28T01:44:14.864305500Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 01:44:14.864404 containerd[1553]: time="2026-01-28T01:44:14.864328833Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 01:44:14.864404 containerd[1553]: time="2026-01-28T01:44:14.864353799Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 01:44:14.865548 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:49930.service - OpenSSH per-connection server daemon (10.0.0.1:49930).
Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.865144806Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.871727853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.871773068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.871791322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.871813263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.871883985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.871984352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.872113774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 01:44:14.872396 containerd[1553]: time="2026-01-28T01:44:14.872303468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 01:44:14.873064 containerd[1553]: time="2026-01-28T01:44:14.872330568Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 01:44:14.873064 containerd[1553]: time="2026-01-28T01:44:14.872600885Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 01:44:14.873064 containerd[1553]: time="2026-01-28T01:44:14.872769781Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 01:44:14.873064 containerd[1553]: time="2026-01-28T01:44:14.872795439Z" level=info msg="Start snapshots syncer" Jan 28 01:44:14.873064 containerd[1553]: time="2026-01-28T01:44:14.872831325Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 28 01:44:14.873363 containerd[1553]: time="2026-01-28T01:44:14.873257211Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 01:44:14.873538 containerd[1553]: time="2026-01-28T01:44:14.873375361Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 01:44:14.873538 containerd[1553]: time="2026-01-28T01:44:14.873438609Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 01:44:14.873743 containerd[1553]: time="2026-01-28T01:44:14.873629986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 01:44:14.873795 containerd[1553]: time="2026-01-28T01:44:14.873741374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 01:44:14.873795 containerd[1553]: time="2026-01-28T01:44:14.873757905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 01:44:14.873795 containerd[1553]: time="2026-01-28T01:44:14.873783122Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 01:44:14.873874 containerd[1553]: time="2026-01-28T01:44:14.873800935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 01:44:14.873874 containerd[1553]: time="2026-01-28T01:44:14.873814851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 01:44:14.873874 containerd[1553]: time="2026-01-28T01:44:14.873834989Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.873876096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.873972295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.873995859Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874043859Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874072121Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874087590Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874105314Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874118388Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874136301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874163232Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874187607Z" level=info msg="runtime interface created" Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874197145Z" level=info msg="created NRI interface" Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874210109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874230387Z" level=info msg="Connect containerd service" Jan 28 01:44:14.874438 containerd[1553]: time="2026-01-28T01:44:14.874258760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 01:44:14.879959 containerd[1553]: time="2026-01-28T01:44:14.878878833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 01:44:14.896213 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 01:44:14.896516 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 01:44:14.907285 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 01:44:14.958414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 01:44:14.982386 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 01:44:14.991881 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 01:44:14.997598 systemd[1]: Reached target getty.target - Login Prompts.
Jan 28 01:44:15.039562 containerd[1553]: time="2026-01-28T01:44:15.039462556Z" level=info msg="Start subscribing containerd event" Jan 28 01:44:15.039562 containerd[1553]: time="2026-01-28T01:44:15.039534500Z" level=info msg="Start recovering state" Jan 28 01:44:15.039824 containerd[1553]: time="2026-01-28T01:44:15.039800088Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 01:44:15.040026 containerd[1553]: time="2026-01-28T01:44:15.039834921Z" level=info msg="Start event monitor" Jan 28 01:44:15.040026 containerd[1553]: time="2026-01-28T01:44:15.040020437Z" level=info msg="Start cni network conf syncer for default" Jan 28 01:44:15.040095 containerd[1553]: time="2026-01-28T01:44:15.040036436Z" level=info msg="Start streaming server" Jan 28 01:44:15.040095 containerd[1553]: time="2026-01-28T01:44:15.040054430Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 01:44:15.040095 containerd[1553]: time="2026-01-28T01:44:15.040064890Z" level=info msg="runtime interface starting up..." Jan 28 01:44:15.040095 containerd[1553]: time="2026-01-28T01:44:15.040072624Z" level=info msg="starting plugins..." Jan 28 01:44:15.040095 containerd[1553]: time="2026-01-28T01:44:15.040093413Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 01:44:15.040306 containerd[1553]: time="2026-01-28T01:44:15.040287397Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 01:44:15.040547 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 01:44:15.041743 containerd[1553]: time="2026-01-28T01:44:15.041724369Z" level=info msg="containerd successfully booted in 0.247564s" Jan 28 01:44:15.055119 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 49930 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:15.060278 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:15.093726 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 01:44:15.108257 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 01:44:15.129834 systemd-logind[1540]: New session 1 of user core. Jan 28 01:44:15.148990 tar[1552]: linux-amd64/README.md Jan 28 01:44:15.149676 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 01:44:15.165369 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 01:44:15.189282 (systemd)[1636]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 01:44:15.189504 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 01:44:15.205495 systemd-logind[1540]: New session c1 of user core. Jan 28 01:44:15.526715 systemd[1636]: Queued start job for default target default.target. Jan 28 01:44:15.549551 systemd[1636]: Created slice app.slice - User Application Slice. Jan 28 01:44:15.550162 systemd[1636]: Reached target paths.target - Paths. Jan 28 01:44:15.551096 systemd[1636]: Reached target timers.target - Timers. Jan 28 01:44:15.558175 systemd[1636]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 01:44:15.619130 systemd[1636]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 01:44:15.620595 systemd[1636]: Reached target sockets.target - Sockets. Jan 28 01:44:15.623358 systemd[1636]: Reached target basic.target - Basic System. Jan 28 01:44:15.626358 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 28 01:44:15.633082 systemd[1636]: Reached target default.target - Main User Target. Jan 28 01:44:15.633187 systemd[1636]: Startup finished in 414ms. Jan 28 01:44:15.651488 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 01:44:15.731104 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:38774.service - OpenSSH per-connection server daemon (10.0.0.1:38774). Jan 28 01:44:15.811517 systemd-networkd[1475]: eth0: Gained IPv6LL Jan 28 01:44:15.819410 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 01:44:15.836516 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 01:44:15.850604 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 28 01:44:15.860094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:44:15.875316 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 01:44:15.911807 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 38774 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:15.916014 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:15.917578 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 28 01:44:15.918092 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 28 01:44:15.930010 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 01:44:15.941372 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 01:44:15.958158 systemd-logind[1540]: New session 2 of user core. Jan 28 01:44:15.976381 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 01:44:16.060547 sshd[1669]: Connection closed by 10.0.0.1 port 38774 Jan 28 01:44:16.058959 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:16.077427 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:38774.service: Deactivated successfully. Jan 28 01:44:16.080579 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 01:44:16.084018 systemd-logind[1540]: Session 2 logged out. Waiting for processes to exit. Jan 28 01:44:16.089240 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:38790.service - OpenSSH per-connection server daemon (10.0.0.1:38790). Jan 28 01:44:16.106745 systemd-logind[1540]: Removed session 2. Jan 28 01:44:16.193417 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 38790 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:16.198105 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:16.211182 systemd-logind[1540]: New session 3 of user core. Jan 28 01:44:16.229251 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 01:44:16.329229 sshd[1678]: Connection closed by 10.0.0.1 port 38790 Jan 28 01:44:16.330185 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:16.338435 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:38790.service: Deactivated successfully. Jan 28 01:44:16.341344 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 01:44:16.345105 systemd-logind[1540]: Session 3 logged out. Waiting for processes to exit. Jan 28 01:44:16.351426 systemd-logind[1540]: Removed session 3. Jan 28 01:44:17.543777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:44:17.563160 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 01:44:17.571810 systemd[1]: Startup finished in 4.595s (kernel) + 12.377s (initrd) + 8.668s (userspace) = 25.641s. Jan 28 01:44:17.587761 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:44:18.958841 kubelet[1687]: E0128 01:44:18.958395 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:44:18.965305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:44:18.965565 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:44:18.966246 systemd[1]: kubelet.service: Consumed 1.396s CPU time, 266.2M memory peak. Jan 28 01:44:26.344329 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:44430.service - OpenSSH per-connection server daemon (10.0.0.1:44430). Jan 28 01:44:26.427035 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 44430 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:26.429676 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:26.438387 systemd-logind[1540]: New session 4 of user core. Jan 28 01:44:26.453205 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 01:44:26.522558 sshd[1704]: Connection closed by 10.0.0.1 port 44430 Jan 28 01:44:26.520639 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:26.535862 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:44430.service: Deactivated successfully. Jan 28 01:44:26.539417 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 01:44:26.542056 systemd-logind[1540]: Session 4 logged out. Waiting for processes to exit. Jan 28 01:44:26.547528 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:44442.service - OpenSSH per-connection server daemon (10.0.0.1:44442). Jan 28 01:44:26.549863 systemd-logind[1540]: Removed session 4. Jan 28 01:44:26.628588 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 44442 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:26.630648 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:26.641846 systemd-logind[1540]: New session 5 of user core. Jan 28 01:44:26.652329 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 01:44:26.709696 sshd[1713]: Connection closed by 10.0.0.1 port 44442 Jan 28 01:44:26.712176 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:26.724719 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:44442.service: Deactivated successfully. Jan 28 01:44:26.727477 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 01:44:26.729960 systemd-logind[1540]: Session 5 logged out. Waiting for processes to exit. Jan 28 01:44:26.734360 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:44456.service - OpenSSH per-connection server daemon (10.0.0.1:44456). Jan 28 01:44:26.736545 systemd-logind[1540]: Removed session 5. 
Jan 28 01:44:26.819171 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 44456 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:26.821463 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:26.836295 systemd-logind[1540]: New session 6 of user core. Jan 28 01:44:26.848792 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 01:44:26.918572 sshd[1723]: Connection closed by 10.0.0.1 port 44456 Jan 28 01:44:26.918982 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:26.934513 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:44456.service: Deactivated successfully. Jan 28 01:44:26.937564 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 01:44:26.939080 systemd-logind[1540]: Session 6 logged out. Waiting for processes to exit. Jan 28 01:44:26.943563 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:44470.service - OpenSSH per-connection server daemon (10.0.0.1:44470). Jan 28 01:44:26.946509 systemd-logind[1540]: Removed session 6. Jan 28 01:44:27.021321 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 44470 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:27.024057 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:27.038551 systemd-logind[1540]: New session 7 of user core. Jan 28 01:44:27.050580 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 01:44:27.119863 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 01:44:27.120286 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:44:27.145154 sudo[1733]: pam_unix(sudo:session): session closed for user root Jan 28 01:44:27.147314 sshd[1732]: Connection closed by 10.0.0.1 port 44470 Jan 28 01:44:27.147734 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:27.161091 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:44470.service: Deactivated successfully. Jan 28 01:44:27.164085 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 01:44:27.165411 systemd-logind[1540]: Session 7 logged out. Waiting for processes to exit. Jan 28 01:44:27.168829 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:44482.service - OpenSSH per-connection server daemon (10.0.0.1:44482). Jan 28 01:44:27.171563 systemd-logind[1540]: Removed session 7. Jan 28 01:44:27.245539 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 44482 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:27.249068 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:27.259130 systemd-logind[1540]: New session 8 of user core. Jan 28 01:44:27.275126 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 28 01:44:27.340436 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 01:44:27.341293 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:44:27.360135 sudo[1744]: pam_unix(sudo:session): session closed for user root Jan 28 01:44:27.369363 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 01:44:27.369865 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:44:27.384234 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 01:44:27.478851 augenrules[1766]: No rules Jan 28 01:44:27.480447 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 01:44:27.480815 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 01:44:27.485636 sudo[1743]: pam_unix(sudo:session): session closed for user root Jan 28 01:44:27.490852 sshd[1742]: Connection closed by 10.0.0.1 port 44482 Jan 28 01:44:27.490007 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jan 28 01:44:27.506283 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:44482.service: Deactivated successfully. Jan 28 01:44:27.512245 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 01:44:27.514715 systemd-logind[1540]: Session 8 logged out. Waiting for processes to exit. Jan 28 01:44:27.525182 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:44494.service - OpenSSH per-connection server daemon (10.0.0.1:44494). Jan 28 01:44:27.530990 systemd-logind[1540]: Removed session 8. Jan 28 01:44:27.612370 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 44494 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:44:27.616157 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:44:27.625489 systemd-logind[1540]: New session 9 of user core. Jan 28 01:44:27.639283 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 01:44:27.706653 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 01:44:27.707265 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 01:44:28.213029 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 01:44:28.234570 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 01:44:28.592250 dockerd[1800]: time="2026-01-28T01:44:28.591702614Z" level=info msg="Starting up" Jan 28 01:44:28.594341 dockerd[1800]: time="2026-01-28T01:44:28.593822751Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 01:44:28.625621 dockerd[1800]: time="2026-01-28T01:44:28.625511549Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 01:44:28.735148 dockerd[1800]: time="2026-01-28T01:44:28.734981027Z" level=info msg="Loading containers: start." Jan 28 01:44:28.755366 kernel: Initializing XFRM netlink socket Jan 28 01:44:29.182435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 01:44:29.184868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 01:44:29.324236 systemd-networkd[1475]: docker0: Link UP Jan 28 01:44:29.444586 dockerd[1800]: time="2026-01-28T01:44:29.444079191Z" level=info msg="Loading containers: done." Jan 28 01:44:29.472621 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1400272971-merged.mount: Deactivated successfully. Jan 28 01:44:29.485846 dockerd[1800]: time="2026-01-28T01:44:29.485661597Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 01:44:29.487541 dockerd[1800]: time="2026-01-28T01:44:29.487431548Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 01:44:29.487845 dockerd[1800]: time="2026-01-28T01:44:29.487692556Z" level=info msg="Initializing buildkit" Jan 28 01:44:29.504585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:44:29.516573 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:44:29.566394 dockerd[1800]: time="2026-01-28T01:44:29.566054067Z" level=info msg="Completed buildkit initialization" Jan 28 01:44:29.578291 dockerd[1800]: time="2026-01-28T01:44:29.578172887Z" level=info msg="Daemon has completed initialization" Jan 28 01:44:29.578476 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 01:44:29.578707 dockerd[1800]: time="2026-01-28T01:44:29.578589415Z" level=info msg="API listen on /run/docker.sock" Jan 28 01:44:29.611075 kubelet[1991]: E0128 01:44:29.610319 1991 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:44:29.617097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:44:29.617401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:44:29.618178 systemd[1]: kubelet.service: Consumed 313ms CPU time, 112.8M memory peak. Jan 28 01:44:30.678291 containerd[1553]: time="2026-01-28T01:44:30.678198797Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 01:44:31.464543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516758059.mount: Deactivated successfully. 
Jan 28 01:44:35.090340 containerd[1553]: time="2026-01-28T01:44:35.090157833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:35.092585 containerd[1553]: time="2026-01-28T01:44:35.092447827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070647" Jan 28 01:44:35.096049 containerd[1553]: time="2026-01-28T01:44:35.095497141Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:35.103162 containerd[1553]: time="2026-01-28T01:44:35.103094229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:35.104730 containerd[1553]: time="2026-01-28T01:44:35.104688367Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 4.426412585s" Jan 28 01:44:35.105193 containerd[1553]: time="2026-01-28T01:44:35.105013994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 01:44:35.106676 containerd[1553]: time="2026-01-28T01:44:35.106615073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 01:44:37.068525 containerd[1553]: time="2026-01-28T01:44:37.068245113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:37.070470 containerd[1553]: time="2026-01-28T01:44:37.070387181Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993354" Jan 28 01:44:37.073577 containerd[1553]: time="2026-01-28T01:44:37.073364240Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:37.077117 containerd[1553]: time="2026-01-28T01:44:37.077036182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:37.078328 containerd[1553]: time="2026-01-28T01:44:37.078036529Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.970511147s" Jan 28 01:44:37.078328 containerd[1553]: time="2026-01-28T01:44:37.078108153Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 28 01:44:37.078931 containerd[1553]: time="2026-01-28T01:44:37.078801959Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 01:44:38.834028 containerd[1553]: time="2026-01-28T01:44:38.832464363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:38.835041 containerd[1553]: time="2026-01-28T01:44:38.834819107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405076" Jan 28 01:44:38.838160 containerd[1553]: time="2026-01-28T01:44:38.838047875Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:38.845009 containerd[1553]: time="2026-01-28T01:44:38.844071166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:38.845162 containerd[1553]: time="2026-01-28T01:44:38.845127037Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.766157956s" Jan 28 01:44:38.845216 containerd[1553]: time="2026-01-28T01:44:38.845170378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 01:44:38.846327 containerd[1553]: time="2026-01-28T01:44:38.846240666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 01:44:39.683615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 01:44:39.686588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:44:39.979016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:44:39.997103 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:44:40.093962 kubelet[2115]: E0128 01:44:40.093502 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:44:40.100594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:44:40.100955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:44:40.101697 systemd[1]: kubelet.service: Consumed 297ms CPU time, 110.8M memory peak. Jan 28 01:44:40.335739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443319501.mount: Deactivated successfully.
Jan 28 01:44:41.215224 containerd[1553]: time="2026-01-28T01:44:41.214195206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:41.218070 containerd[1553]: time="2026-01-28T01:44:41.217987215Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161899" Jan 28 01:44:41.220086 containerd[1553]: time="2026-01-28T01:44:41.220033905Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:41.225717 containerd[1553]: time="2026-01-28T01:44:41.225604353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:41.226484 containerd[1553]: time="2026-01-28T01:44:41.226374570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.380089882s" Jan 28 01:44:41.226484 containerd[1553]: time="2026-01-28T01:44:41.226439902Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 01:44:41.227499 containerd[1553]: time="2026-01-28T01:44:41.227350521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 01:44:41.870328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228200484.mount: Deactivated successfully. 
Jan 28 01:44:43.953101 containerd[1553]: time="2026-01-28T01:44:43.952733171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:43.955482 containerd[1553]: time="2026-01-28T01:44:43.955268301Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 28 01:44:43.962052 containerd[1553]: time="2026-01-28T01:44:43.961397213Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:43.972328 containerd[1553]: time="2026-01-28T01:44:43.970390869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:43.975151 containerd[1553]: time="2026-01-28T01:44:43.971864871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.744442224s" Jan 28 01:44:43.975151 containerd[1553]: time="2026-01-28T01:44:43.973258793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 01:44:43.977477 containerd[1553]: time="2026-01-28T01:44:43.975532777Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 01:44:44.911490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679965530.mount: Deactivated successfully. 
Jan 28 01:44:44.940610 containerd[1553]: time="2026-01-28T01:44:44.940379569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:44:44.944631 containerd[1553]: time="2026-01-28T01:44:44.944182450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 28 01:44:44.953174 containerd[1553]: time="2026-01-28T01:44:44.950129911Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:44:44.959644 containerd[1553]: time="2026-01-28T01:44:44.957493675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 01:44:44.959644 containerd[1553]: time="2026-01-28T01:44:44.959428076Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 983.818997ms" Jan 28 01:44:44.959644 containerd[1553]: time="2026-01-28T01:44:44.959464254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 01:44:44.961408 containerd[1553]: time="2026-01-28T01:44:44.961123089Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 01:44:45.607569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540962303.mount: Deactivated successfully. 
Jan 28 01:44:49.026110 containerd[1553]: time="2026-01-28T01:44:49.025472747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:49.028563 containerd[1553]: time="2026-01-28T01:44:49.028296193Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Jan 28 01:44:49.031105 containerd[1553]: time="2026-01-28T01:44:49.030863370Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:49.037274 containerd[1553]: time="2026-01-28T01:44:49.037093949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:44:49.038912 containerd[1553]: time="2026-01-28T01:44:49.038662033Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.077421018s" Jan 28 01:44:49.038912 containerd[1553]: time="2026-01-28T01:44:49.038730313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 01:44:50.182797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 01:44:50.186838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:44:50.483681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:44:50.506681 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 01:44:50.590351 kubelet[2270]: E0128 01:44:50.590189 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 01:44:50.595105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 01:44:50.595337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 01:44:50.595958 systemd[1]: kubelet.service: Consumed 294ms CPU time, 108.7M memory peak. Jan 28 01:44:52.675113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:44:52.675421 systemd[1]: kubelet.service: Consumed 294ms CPU time, 108.7M memory peak. Jan 28 01:44:52.688691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:44:52.777869 systemd[1]: Reload requested from client PID 2286 ('systemctl') (unit session-9.scope)... Jan 28 01:44:52.778386 systemd[1]: Reloading... Jan 28 01:44:53.047982 zram_generator::config[2329]: No configuration found. Jan 28 01:44:53.564004 systemd[1]: Reloading finished in 784 ms. Jan 28 01:44:53.722408 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 01:44:53.722592 systemd[1]: kubelet.service: Failed with result 'signal'. 
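The kubelet exits here (status=1) simply because /var/lib/kubelet/config.yaml does not exist yet: kubeadm writes that file during init, and until then systemd keeps scheduling restarts (the counter is already at 3). A standalone sketch of the failing startup check, assuming only the Go standard library:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.ReadFile(path); err != nil {
		// Same failure mode as the log: the file only appears after
		// `kubeadm init` has generated it, so early starts must exit
		// and rely on systemd's Restart= policy to try again.
		fmt.Fprintf(os.Stderr, "failed to read kubelet config file %q: %v\n", path, err)
		os.Exit(1)
	}
}
```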
Jan 28 01:44:53.725530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:44:53.725640 systemd[1]: kubelet.service: Consumed 225ms CPU time, 98.6M memory peak. Jan 28 01:44:53.735742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:44:54.232231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:44:54.285716 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:44:54.470969 kubelet[2378]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:44:54.470969 kubelet[2378]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:44:54.470969 kubelet[2378]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:44:54.470969 kubelet[2378]: I0128 01:44:54.470847 2378 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:44:54.975004 kernel: hrtimer: interrupt took 2784962 ns Jan 28 01:44:55.037510 kubelet[2378]: I0128 01:44:55.037421 2378 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:44:55.037510 kubelet[2378]: I0128 01:44:55.037489 2378 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:44:55.037848 kubelet[2378]: I0128 01:44:55.037779 2378 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:44:55.112724 kubelet[2378]: E0128 01:44:55.112620 2378 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:55.116764 kubelet[2378]: I0128 01:44:55.116675 2378 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:44:55.142113 kubelet[2378]: I0128 01:44:55.141832 2378 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:44:55.163604 kubelet[2378]: I0128 01:44:55.162048 2378 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:44:55.166235 kubelet[2378]: I0128 01:44:55.164856 2378 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:44:55.166235 kubelet[2378]: I0128 01:44:55.165005 2378 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:44:55.166235 kubelet[2378]: I0128 01:44:55.165296 2378 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:44:55.166235 kubelet[2378]: I0128 01:44:55.165308 2378 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:44:55.166560 kubelet[2378]: I0128 01:44:55.165525 2378 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:44:55.172612 kubelet[2378]: I0128 01:44:55.171620 2378 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:44:55.172612 kubelet[2378]: I0128 01:44:55.172355 2378 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:44:55.174234 kubelet[2378]: I0128 01:44:55.174091 2378 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:44:55.174234 kubelet[2378]: I0128 01:44:55.174164 2378 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:44:55.183364 kubelet[2378]: W0128 01:44:55.183166 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:55.183364 kubelet[2378]: E0128 01:44:55.183305 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:55.183726 kubelet[2378]: W0128 01:44:55.183142 2378 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:55.183792 kubelet[2378]: E0128 01:44:55.183724 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:55.185053 kubelet[2378]: I0128 01:44:55.185012 2378 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 01:44:55.187073 kubelet[2378]: I0128 01:44:55.186869 2378 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:44:55.190176 kubelet[2378]: W0128 01:44:55.188754 2378 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 01:44:55.192083 kubelet[2378]: I0128 01:44:55.192025 2378 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:44:55.192083 kubelet[2378]: I0128 01:44:55.192076 2378 server.go:1287] "Started kubelet" Jan 28 01:44:55.192376 kubelet[2378]: I0128 01:44:55.192336 2378 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:44:55.199790 kubelet[2378]: I0128 01:44:55.199383 2378 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:44:55.200096 kubelet[2378]: I0128 01:44:55.200032 2378 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:44:55.203976 kubelet[2378]: I0128 01:44:55.202785 2378 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:44:55.204647 kubelet[2378]: I0128 01:44:55.204615 2378 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:44:55.208671 kubelet[2378]: I0128 01:44:55.208413 2378 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:44:55.214002 kubelet[2378]: I0128 01:44:55.213980 2378 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:44:55.214287 kubelet[2378]: I0128 01:44:55.214272 2378 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:44:55.214389 kubelet[2378]: I0128 01:44:55.214378 2378 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:44:55.214786 kubelet[2378]: E0128 01:44:55.214535 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Jan 28 01:44:55.215870 kubelet[2378]: E0128 01:44:55.215136 2378 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:44:55.215870 kubelet[2378]: W0128 01:44:55.215275 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:55.215870 
kubelet[2378]: E0128 01:44:55.215326 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:55.215870 kubelet[2378]: I0128 01:44:55.215560 2378 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:44:55.217514 kubelet[2378]: E0128 01:44:55.214505 2378 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.33:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.33:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188ec1b6f7fe7a47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-28 01:44:55.192042055 +0000 UTC m=+0.886421307,LastTimestamp:2026-01-28 01:44:55.192042055 +0000 UTC m=+0.886421307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 28 01:44:55.217704 kubelet[2378]: E0128 01:44:55.217556 2378 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:44:55.218397 kubelet[2378]: I0128 01:44:55.218178 2378 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:44:55.218563 kubelet[2378]: I0128 01:44:55.218487 2378 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:44:55.248367 kubelet[2378]: I0128 01:44:55.247681 2378 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:44:55.248367 kubelet[2378]: I0128 01:44:55.247709 2378 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:44:55.248367 kubelet[2378]: I0128 01:44:55.247732 2378 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:44:55.267168 kubelet[2378]: I0128 01:44:55.267075 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:44:55.274128 kubelet[2378]: I0128 01:44:55.272004 2378 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:44:55.274128 kubelet[2378]: I0128 01:44:55.272077 2378 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:44:55.274128 kubelet[2378]: I0128 01:44:55.272107 2378 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
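Every request in this burst (the CSR submission, the reflector lists, the node lease, event posting) fails with connection refused against 10.0.0.33:6443. That is expected on a kubeadm control-plane node: the kubelet itself has to start the static kube-apiserver pod before anything listens on that port, so client-go simply keeps retrying. A sketch of the probe-and-backoff shape (illustrative only, not client-go's actual backoff):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	backoff := time.Second
	for {
		conn, err := net.DialTimeout("tcp", "10.0.0.33:6443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable")
			return
		}
		fmt.Println("still unreachable:", err) // "connect: connection refused" until the static pod is up
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2 // doubled and capped; the real client-go backoff is jittered
		}
	}
}
```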
Jan 28 01:44:55.274128 kubelet[2378]: I0128 01:44:55.272881 2378 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:44:55.274128 kubelet[2378]: E0128 01:44:55.273525 2378 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:44:55.274128 kubelet[2378]: W0128 01:44:55.272659 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:55.274128 kubelet[2378]: E0128 01:44:55.273623 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:55.275530 kubelet[2378]: I0128 01:44:55.274172 2378 policy_none.go:49] "None policy: Start" Jan 28 01:44:55.275530 kubelet[2378]: I0128 01:44:55.274246 2378 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:44:55.275530 kubelet[2378]: I0128 01:44:55.274265 2378 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:44:55.289391 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 01:44:55.315793 kubelet[2378]: E0128 01:44:55.315745 2378 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 28 01:44:55.317102 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 01:44:55.329316 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 01:44:55.342362 kubelet[2378]: I0128 01:44:55.340853 2378 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:44:55.342362 kubelet[2378]: I0128 01:44:55.341289 2378 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:44:55.342362 kubelet[2378]: I0128 01:44:55.341306 2378 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:44:55.344755 kubelet[2378]: I0128 01:44:55.343406 2378 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:44:55.344755 kubelet[2378]: E0128 01:44:55.343709 2378 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 01:44:55.345599 kubelet[2378]: E0128 01:44:55.345537 2378 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 28 01:44:55.398482 systemd[1]: Created slice kubepods-burstable-pod1159cb65e7c68e803dbe60bba2579d35.slice - libcontainer container kubepods-burstable-pod1159cb65e7c68e803dbe60bba2579d35.slice. 
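With the systemd cgroup driver, the kubelet creates one slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice) and then a nested per-pod slice named after the pod UID, which is what the last line above shows. A hypothetical helper reproducing that naming; systemd reserves "-" to encode slice hierarchy, so any dashes in a UID get rewritten:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice mirrors the unit names in the log. Illustrative sketch,
// not the kubelet's actual cgroup-name translation code.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "1159cb65e7c68e803dbe60bba2579d35"))
	// kubepods-burstable-pod1159cb65e7c68e803dbe60bba2579d35.slice
}
```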
Jan 28 01:44:55.415978 kubelet[2378]: E0128 01:44:55.415484 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Jan 28 01:44:55.415978 kubelet[2378]: I0128 01:44:55.415773 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:55.415978 kubelet[2378]: I0128 01:44:55.415809 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:55.415978 kubelet[2378]: I0128 01:44:55.415841 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:55.416557 kubelet[2378]: I0128 01:44:55.415867 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1159cb65e7c68e803dbe60bba2579d35-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1159cb65e7c68e803dbe60bba2579d35\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:55.416595 kubelet[2378]: I0128 01:44:55.416579 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1159cb65e7c68e803dbe60bba2579d35-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1159cb65e7c68e803dbe60bba2579d35\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:55.416679 kubelet[2378]: I0128 01:44:55.416618 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1159cb65e7c68e803dbe60bba2579d35-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1159cb65e7c68e803dbe60bba2579d35\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:55.416679 kubelet[2378]: I0128 01:44:55.416648 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:55.416679 kubelet[2378]: I0128 01:44:55.416670 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:55.416755 
kubelet[2378]: I0128 01:44:55.416693 2378 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:44:55.422870 kubelet[2378]: E0128 01:44:55.422756 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:55.430583 systemd[1]: Created slice kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice - libcontainer container kubepods-burstable-pod73f4d0ebfe2f50199eb060021cc3bcbf.slice. Jan 28 01:44:55.445085 kubelet[2378]: I0128 01:44:55.444832 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:44:55.445768 kubelet[2378]: E0128 01:44:55.445617 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jan 28 01:44:55.447667 kubelet[2378]: E0128 01:44:55.447568 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:55.452361 systemd[1]: Created slice kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice - libcontainer container kubepods-burstable-pod0b8273f45c576ca70f8db6fe540c065c.slice. Jan 28 01:44:55.456356 kubelet[2378]: E0128 01:44:55.456235 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:55.649115 kubelet[2378]: I0128 01:44:55.648727 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:44:55.649630 kubelet[2378]: E0128 01:44:55.649549 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jan 28 01:44:55.726809 kubelet[2378]: E0128 01:44:55.724614 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:55.727055 containerd[1553]: time="2026-01-28T01:44:55.725603736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1159cb65e7c68e803dbe60bba2579d35,Namespace:kube-system,Attempt:0,}" Jan 28 01:44:55.749495 kubelet[2378]: E0128 01:44:55.748583 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:55.752671 containerd[1553]: time="2026-01-28T01:44:55.751676764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,}" Jan 28 01:44:55.758529 kubelet[2378]: E0128 01:44:55.758499 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:55.760541 containerd[1553]: time="2026-01-28T01:44:55.760264995Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,}" Jan 28 01:44:55.819686 containerd[1553]: time="2026-01-28T01:44:55.819265733Z" level=info msg="connecting to shim 2e88276055bb261703f506af274ac85388857c1cc2a7682a4acb1f65b07f502e" address="unix:///run/containerd/s/e2f6232797012f3946aa45c1ee28986491f60aab3d566e6fc5240fe15aed7770" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:44:55.820204 kubelet[2378]: E0128 01:44:55.820112 2378 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Jan 28 01:44:55.868331 containerd[1553]: time="2026-01-28T01:44:55.868231394Z" level=info msg="connecting to shim ba827dcc546edf901d5201dc7aa0b8f50d8cab887edbddba8c5b8860b67787c0" address="unix:///run/containerd/s/c63a8a02611426a1846633fe0c4677c00dbaf7e8f0c9ddba0d7c459707052d1b" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:44:55.877451 containerd[1553]: time="2026-01-28T01:44:55.876778573Z" level=info msg="connecting to shim a38c1e93283a59e774452a02f8663a034c912452f9b29720dc422b6cb09450c5" address="unix:///run/containerd/s/768a4466b21d2434adacd01279af9f0513b9ea1feec6aa1893af3fe9d75c580c" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:44:55.899217 systemd[1]: Started cri-containerd-2e88276055bb261703f506af274ac85388857c1cc2a7682a4acb1f65b07f502e.scope - libcontainer container 2e88276055bb261703f506af274ac85388857c1cc2a7682a4acb1f65b07f502e. Jan 28 01:44:55.978613 systemd[1]: Started cri-containerd-ba827dcc546edf901d5201dc7aa0b8f50d8cab887edbddba8c5b8860b67787c0.scope - libcontainer container ba827dcc546edf901d5201dc7aa0b8f50d8cab887edbddba8c5b8860b67787c0. Jan 28 01:44:55.995733 systemd[1]: Started cri-containerd-a38c1e93283a59e774452a02f8663a034c912452f9b29720dc422b6cb09450c5.scope - libcontainer container a38c1e93283a59e774452a02f8663a034c912452f9b29720dc422b6cb09450c5. 
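The "connecting to shim" lines give away the plumbing: each sandbox gets its own shim process exposing a ttrpc endpoint under /run/containerd/s/, and the cri-containerd-<id>.scope units are the transient systemd scopes those shims run containers in. Note that the kube-apiserver container later reuses its sandbox's socket (the e2f62327... address appears twice in this log). A liveness check against such a socket is just a unix-domain dial; the path below is copied from the log and exists only on that host:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Sandbox shim socket taken verbatim from the log above.
	const sock = "/run/containerd/s/e2f6232797012f3946aa45c1ee28986491f60aab3d566e6fc5240fe15aed7770"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Println("shim not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("shim socket is accepting connections")
}
```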
Jan 28 01:44:56.062648 kubelet[2378]: I0128 01:44:56.060273 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:44:56.062648 kubelet[2378]: E0128 01:44:56.060621 2378 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Jan 28 01:44:56.140300 containerd[1553]: time="2026-01-28T01:44:56.140026714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1159cb65e7c68e803dbe60bba2579d35,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e88276055bb261703f506af274ac85388857c1cc2a7682a4acb1f65b07f502e\"" Jan 28 01:44:56.141450 kubelet[2378]: E0128 01:44:56.141346 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:56.153325 containerd[1553]: time="2026-01-28T01:44:56.152390642Z" level=info msg="CreateContainer within sandbox \"2e88276055bb261703f506af274ac85388857c1cc2a7682a4acb1f65b07f502e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 01:44:56.243398 containerd[1553]: time="2026-01-28T01:44:56.242192914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:73f4d0ebfe2f50199eb060021cc3bcbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"a38c1e93283a59e774452a02f8663a034c912452f9b29720dc422b6cb09450c5\"" Jan 28 01:44:56.243398 containerd[1553]: time="2026-01-28T01:44:56.243178073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0b8273f45c576ca70f8db6fe540c065c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba827dcc546edf901d5201dc7aa0b8f50d8cab887edbddba8c5b8860b67787c0\"" Jan 28 01:44:56.247940 kubelet[2378]: E0128 01:44:56.244820 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:56.247940 kubelet[2378]: E0128 01:44:56.245113 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:56.251322 containerd[1553]: time="2026-01-28T01:44:56.248804092Z" level=info msg="CreateContainer within sandbox \"a38c1e93283a59e774452a02f8663a034c912452f9b29720dc422b6cb09450c5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 01:44:56.253017 containerd[1553]: time="2026-01-28T01:44:56.251581910Z" level=info msg="Container 569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:44:56.253017 containerd[1553]: time="2026-01-28T01:44:56.252125235Z" level=info msg="CreateContainer within sandbox \"ba827dcc546edf901d5201dc7aa0b8f50d8cab887edbddba8c5b8860b67787c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 01:44:56.329975 kubelet[2378]: W0128 01:44:56.326621 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:56.329975 kubelet[2378]: E0128 01:44:56.326693 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:56.334647 containerd[1553]: time="2026-01-28T01:44:56.334504634Z" level=info msg="Container 09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:44:56.348725 containerd[1553]: time="2026-01-28T01:44:56.347623949Z" level=info msg="Container a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:44:56.350797 containerd[1553]: time="2026-01-28T01:44:56.350519874Z" level=info msg="CreateContainer within sandbox \"2e88276055bb261703f506af274ac85388857c1cc2a7682a4acb1f65b07f502e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f\"" Jan 28 01:44:56.353445 containerd[1553]: time="2026-01-28T01:44:56.353414473Z" level=info msg="StartContainer for \"569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f\"" Jan 28 01:44:56.359770 containerd[1553]: time="2026-01-28T01:44:56.359678758Z" level=info msg="CreateContainer within sandbox \"a38c1e93283a59e774452a02f8663a034c912452f9b29720dc422b6cb09450c5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be\"" Jan 28 01:44:56.360471 containerd[1553]: time="2026-01-28T01:44:56.359811006Z" level=info msg="connecting to shim 569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f" address="unix:///run/containerd/s/e2f6232797012f3946aa45c1ee28986491f60aab3d566e6fc5240fe15aed7770" protocol=ttrpc version=3 Jan 28 01:44:56.362098 containerd[1553]: time="2026-01-28T01:44:56.361677790Z" level=info msg="StartContainer for \"09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be\"" Jan 28 01:44:56.366461 containerd[1553]: time="2026-01-28T01:44:56.366436437Z" level=info msg="connecting to shim 09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be" address="unix:///run/containerd/s/768a4466b21d2434adacd01279af9f0513b9ea1feec6aa1893af3fe9d75c580c" protocol=ttrpc version=3 Jan 28 01:44:56.379417 containerd[1553]: time="2026-01-28T01:44:56.379307386Z" level=info msg="CreateContainer within sandbox \"ba827dcc546edf901d5201dc7aa0b8f50d8cab887edbddba8c5b8860b67787c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c\"" Jan 28 01:44:56.383028 containerd[1553]: time="2026-01-28T01:44:56.382016939Z" level=info msg="StartContainer for \"a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c\"" Jan 28 01:44:56.383673 containerd[1553]: time="2026-01-28T01:44:56.383646493Z" level=info msg="connecting to shim a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c" address="unix:///run/containerd/s/c63a8a02611426a1846633fe0c4677c00dbaf7e8f0c9ddba0d7c459707052d1b" protocol=ttrpc version=3 Jan 28 01:44:56.405541 systemd[1]: Started cri-containerd-09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be.scope - libcontainer container 09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be. 
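Container creation is a two-step CRI flow: CreateContainer returns a 64-hex id, and StartContainer then launches it, at which point systemd reports a transient cri-containerd-<id>.scope, as in the last line above. A hypothetical one-liner mapping an id to its unit name, matching the pattern visible in this log:

```go
package main

import "fmt"

// scopeUnit is an illustrative helper, not containerd's code: the log
// shows "cri-containerd-09d8b5da....scope" starting right after
// CreateContainer returned that same id.
func scopeUnit(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	fmt.Println(scopeUnit("09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be"))
}
```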
Jan 28 01:44:56.408198 systemd[1]: Started cri-containerd-569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f.scope - libcontainer container 569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f. Jan 28 01:44:56.443397 systemd[1]: Started cri-containerd-a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c.scope - libcontainer container a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c. Jan 28 01:44:56.465930 kubelet[2378]: W0128 01:44:56.465737 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:56.465930 kubelet[2378]: E0128 01:44:56.465860 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:56.549559 kubelet[2378]: W0128 01:44:56.549363 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:56.549559 kubelet[2378]: E0128 01:44:56.549440 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:56.557422 containerd[1553]: time="2026-01-28T01:44:56.556520185Z" level=info msg="StartContainer for \"569a1647eda710bd9d449eef75cd194fe2f5615c230823bb79d746ebedb6e08f\" returns successfully" Jan 28 01:44:56.573980 kubelet[2378]: W0128 01:44:56.572987 2378 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Jan 28 01:44:56.573980 kubelet[2378]: E0128 01:44:56.573109 2378 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.33:6443: connect: connection refused" logger="UnhandledError" Jan 28 01:44:56.578823 containerd[1553]: time="2026-01-28T01:44:56.578379155Z" level=info msg="StartContainer for \"09d8b5da283230e1d4b693a5913d993dc82517e34e44c695db99b441e15330be\" returns successfully" Jan 28 01:44:56.597274 containerd[1553]: time="2026-01-28T01:44:56.597158108Z" level=info msg="StartContainer for \"a9d2b093ddefa55514d99c86552e9fb022401ed84773c69c0b565893167a0a0c\" returns successfully" Jan 28 01:44:56.865745 kubelet[2378]: I0128 01:44:56.865523 2378 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:44:57.323232 kubelet[2378]: E0128 01:44:57.322792 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:57.323551 
kubelet[2378]: E0128 01:44:57.323299 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:57.337965 kubelet[2378]: E0128 01:44:57.336028 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:57.339179 kubelet[2378]: E0128 01:44:57.338804 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:57.345810 kubelet[2378]: E0128 01:44:57.345527 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:57.346811 kubelet[2378]: E0128 01:44:57.346710 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:58.352145 kubelet[2378]: E0128 01:44:58.350963 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:58.352145 kubelet[2378]: E0128 01:44:58.351139 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:58.352145 kubelet[2378]: E0128 01:44:58.351347 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:58.352145 kubelet[2378]: E0128 01:44:58.351424 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:58.352145 kubelet[2378]: E0128 01:44:58.351585 2378 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 28 01:44:58.352145 kubelet[2378]: E0128 01:44:58.351651 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:58.597274 kubelet[2378]: E0128 01:44:58.597188 2378 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 28 01:44:58.758724 kubelet[2378]: I0128 01:44:58.758668 2378 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:44:58.758724 kubelet[2378]: E0128 01:44:58.758717 2378 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 28 01:44:58.812216 kubelet[2378]: I0128 01:44:58.811501 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:58.823651 kubelet[2378]: E0128 01:44:58.823557 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:58.823651 kubelet[2378]: I0128 01:44:58.823597 2378 kubelet.go:3194] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:58.826184 kubelet[2378]: E0128 01:44:58.825977 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:44:58.826184 kubelet[2378]: I0128 01:44:58.826064 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:44:58.828795 kubelet[2378]: E0128 01:44:58.828469 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 01:44:59.176640 kubelet[2378]: I0128 01:44:59.176414 2378 apiserver.go:52] "Watching apiserver" Jan 28 01:44:59.214801 kubelet[2378]: I0128 01:44:59.214662 2378 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:44:59.351350 kubelet[2378]: I0128 01:44:59.351226 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:44:59.351804 kubelet[2378]: I0128 01:44:59.351448 2378 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:59.355864 kubelet[2378]: E0128 01:44:59.355820 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 28 01:44:59.356666 kubelet[2378]: E0128 01:44:59.355872 2378 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 28 01:44:59.356666 kubelet[2378]: E0128 01:44:59.356141 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:59.356666 kubelet[2378]: E0128 01:44:59.356268 2378 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:44:59.932751 update_engine[1545]: I20260128 01:44:59.932624 1545 update_attempter.cc:509] Updating boot flags... Jan 28 01:45:01.294707 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-9.scope)... Jan 28 01:45:01.294777 systemd[1]: Reloading... Jan 28 01:45:01.413076 zram_generator::config[2717]: No configuration found. Jan 28 01:45:01.802085 systemd[1]: Reloading finished in 506 ms. Jan 28 01:45:01.859286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:45:01.875718 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 01:45:01.876342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 01:45:01.876416 systemd[1]: kubelet.service: Consumed 1.581s CPU time, 134M memory peak. Jan 28 01:45:01.880849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 01:45:02.144075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 01:45:02.157721 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 01:45:02.241612 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:45:02.241612 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 01:45:02.241612 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 01:45:02.242107 kubelet[2762]: I0128 01:45:02.241675 2762 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 01:45:02.252956 kubelet[2762]: I0128 01:45:02.252843 2762 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 01:45:02.253082 kubelet[2762]: I0128 01:45:02.253009 2762 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 01:45:02.253543 kubelet[2762]: I0128 01:45:02.253327 2762 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 01:45:02.254983 kubelet[2762]: I0128 01:45:02.254867 2762 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 01:45:02.258219 kubelet[2762]: I0128 01:45:02.257865 2762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 01:45:02.270273 kubelet[2762]: I0128 01:45:02.270213 2762 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 01:45:02.277874 kubelet[2762]: I0128 01:45:02.277800 2762 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 01:45:02.278484 kubelet[2762]: I0128 01:45:02.278410 2762 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 01:45:02.278716 kubelet[2762]: I0128 01:45:02.278474 2762 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 01:45:02.278716 kubelet[2762]: I0128 01:45:02.278691 2762 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 01:45:02.278716 kubelet[2762]: I0128 01:45:02.278702 2762 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 01:45:02.279059 kubelet[2762]: I0128 01:45:02.278759 2762 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:45:02.279107 kubelet[2762]: I0128 01:45:02.279087 2762 kubelet.go:446] "Attempting to sync node with API server" Jan 28 01:45:02.279161 kubelet[2762]: I0128 01:45:02.279123 2762 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 01:45:02.279161 kubelet[2762]: I0128 01:45:02.279148 2762 kubelet.go:352] "Adding apiserver pod source" Jan 28 01:45:02.279161 kubelet[2762]: I0128 01:45:02.279161 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 01:45:02.282077 kubelet[2762]: I0128 01:45:02.281998 2762 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 28 01:45:02.285649 kubelet[2762]: I0128 01:45:02.282833 2762 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 01:45:02.285649 kubelet[2762]: I0128 01:45:02.283648 2762 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 01:45:02.285649 kubelet[2762]: I0128 01:45:02.283676 2762 server.go:1287] "Started kubelet" Jan 28 01:45:02.286782 kubelet[2762]: I0128 01:45:02.286057 2762 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 01:45:02.288525 kubelet[2762]: I0128 01:45:02.286194 2762 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 01:45:02.289488 kubelet[2762]: I0128 01:45:02.289408 2762 server.go:479] "Adding debug handlers to kubelet server" Jan 28 01:45:02.290040 kubelet[2762]: I0128 01:45:02.289788 2762 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 01:45:02.304722 kubelet[2762]: I0128 01:45:02.303776 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 01:45:02.305843 kubelet[2762]: I0128 01:45:02.305700 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 01:45:02.313322 kubelet[2762]: I0128 01:45:02.313182 2762 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 01:45:02.315739 kubelet[2762]: I0128 01:45:02.313431 2762 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 01:45:02.315739 kubelet[2762]: E0128 01:45:02.313817 2762 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 01:45:02.315831 kubelet[2762]: I0128 01:45:02.315756 2762 reconciler.go:26] "Reconciler: start to sync state" Jan 28 01:45:02.323500 kubelet[2762]: I0128 01:45:02.323344 2762 factory.go:221] Registration of the containerd container factory successfully Jan 28 01:45:02.323500 kubelet[2762]: I0128 01:45:02.323367 2762 factory.go:221] Registration of the systemd container factory successfully Jan 28 01:45:02.324293 kubelet[2762]: I0128 01:45:02.323451 2762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 01:45:02.342554 kubelet[2762]: I0128 01:45:02.342449 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 01:45:02.345730 kubelet[2762]: I0128 01:45:02.345568 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 01:45:02.346036 kubelet[2762]: I0128 01:45:02.345853 2762 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 01:45:02.346036 kubelet[2762]: I0128 01:45:02.346022 2762 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
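"Registration of the crio container factory failed" just above is likewise benign: cAdvisor probes every runtime socket it knows about, and a containerd-only Flatcar node never creates /var/run/crio/crio.sock, while the containerd and systemd factories register successfully. The probe amounts to little more than this sketch:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Expected to fail on this node: CRI-O is not installed, so the
	// socket the factory probes for does not exist.
	if _, err := os.Stat("/var/run/crio/crio.sock"); err != nil {
		fmt.Println("no CRI-O on this node:", err)
	}
}
```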
Jan 28 01:45:02.346036 kubelet[2762]: I0128 01:45:02.346036 2762 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 01:45:02.346401 kubelet[2762]: E0128 01:45:02.346295 2762 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 01:45:02.412478 kubelet[2762]: I0128 01:45:02.412298 2762 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 01:45:02.413779 kubelet[2762]: I0128 01:45:02.413756 2762 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 01:45:02.413983 kubelet[2762]: I0128 01:45:02.413863 2762 state_mem.go:36] "Initialized new in-memory state store" Jan 28 01:45:02.414238 kubelet[2762]: I0128 01:45:02.414216 2762 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 01:45:02.414353 kubelet[2762]: I0128 01:45:02.414318 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 01:45:02.414413 kubelet[2762]: I0128 01:45:02.414403 2762 policy_none.go:49] "None policy: Start" Jan 28 01:45:02.414475 kubelet[2762]: I0128 01:45:02.414464 2762 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 01:45:02.414535 kubelet[2762]: I0128 01:45:02.414525 2762 state_mem.go:35] "Initializing new in-memory state store" Jan 28 01:45:02.414716 kubelet[2762]: I0128 01:45:02.414700 2762 state_mem.go:75] "Updated machine memory state" Jan 28 01:45:02.425591 kubelet[2762]: I0128 01:45:02.425568 2762 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 01:45:02.429698 kubelet[2762]: I0128 01:45:02.428629 2762 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 01:45:02.429698 kubelet[2762]: I0128 01:45:02.428685 2762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 01:45:02.429698 kubelet[2762]: I0128 01:45:02.429120 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 01:45:02.432781 kubelet[2762]: E0128 01:45:02.432648 2762 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 01:45:02.447528 kubelet[2762]: I0128 01:45:02.447402 2762 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:45:02.450063 kubelet[2762]: I0128 01:45:02.449501 2762 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 28 01:45:02.450063 kubelet[2762]: I0128 01:45:02.449738 2762 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 28 01:45:02.573117 kubelet[2762]: I0128 01:45:02.572841 2762 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 28 01:45:02.587579 kubelet[2762]: I0128 01:45:02.586986 2762 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 28 01:45:02.587579 kubelet[2762]: I0128 01:45:02.587086 2762 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 28 01:45:02.617804 kubelet[2762]: I0128 01:45:02.617484 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1159cb65e7c68e803dbe60bba2579d35-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1159cb65e7c68e803dbe60bba2579d35\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:45:02.617804 kubelet[2762]: I0128 01:45:02.617574 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:45:02.617804 kubelet[2762]: I0128 01:45:02.617608 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:45:02.617804 kubelet[2762]: I0128 01:45:02.617627 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:45:02.617804 kubelet[2762]: I0128 01:45:02.617642 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b8273f45c576ca70f8db6fe540c065c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0b8273f45c576ca70f8db6fe540c065c\") " pod="kube-system/kube-scheduler-localhost" Jan 28 01:45:02.618236 kubelet[2762]: I0128 01:45:02.617656 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1159cb65e7c68e803dbe60bba2579d35-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1159cb65e7c68e803dbe60bba2579d35\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:45:02.618236 kubelet[2762]: I0128 01:45:02.617668 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:45:02.618236 kubelet[2762]: I0128 01:45:02.617685 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73f4d0ebfe2f50199eb060021cc3bcbf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"73f4d0ebfe2f50199eb060021cc3bcbf\") " pod="kube-system/kube-controller-manager-localhost" Jan 28 01:45:02.618236 kubelet[2762]: I0128 01:45:02.617699 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1159cb65e7c68e803dbe60bba2579d35-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1159cb65e7c68e803dbe60bba2579d35\") " pod="kube-system/kube-apiserver-localhost" Jan 28 01:45:02.770565 kubelet[2762]: E0128 01:45:02.770282 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:02.770791 kubelet[2762]: E0128 01:45:02.770708 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:02.777106 kubelet[2762]: E0128 01:45:02.776853 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:03.281292 kubelet[2762]: I0128 01:45:03.281195 2762 apiserver.go:52] "Watching apiserver" Jan 28 01:45:03.316126 kubelet[2762]: I0128 01:45:03.316006 2762 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 01:45:03.390356 kubelet[2762]: E0128 01:45:03.386671 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:03.390356 kubelet[2762]: I0128 01:45:03.386703 2762 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 28 01:45:03.393069 kubelet[2762]: E0128 01:45:03.392513 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:03.452208 kubelet[2762]: E0128 01:45:03.446489 2762 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 28 01:45:03.452208 kubelet[2762]: E0128 01:45:03.446746 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:03.641142 kubelet[2762]: I0128 01:45:03.638140 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.638092426 podStartE2EDuration="1.638092426s" podCreationTimestamp="2026-01-28 01:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:45:03.634965569 +0000 UTC 
m=+1.469473131" watchObservedRunningTime="2026-01-28 01:45:03.638092426 +0000 UTC m=+1.472599967" Jan 28 01:45:03.663337 kubelet[2762]: I0128 01:45:03.663015 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6629956080000001 podStartE2EDuration="1.662995608s" podCreationTimestamp="2026-01-28 01:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:45:03.661623228 +0000 UTC m=+1.496130780" watchObservedRunningTime="2026-01-28 01:45:03.662995608 +0000 UTC m=+1.497503160" Jan 28 01:45:03.688499 kubelet[2762]: I0128 01:45:03.687436 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.687415713 podStartE2EDuration="1.687415713s" podCreationTimestamp="2026-01-28 01:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:45:03.686455656 +0000 UTC m=+1.520963199" watchObservedRunningTime="2026-01-28 01:45:03.687415713 +0000 UTC m=+1.521923254" Jan 28 01:45:04.390478 kubelet[2762]: E0128 01:45:04.390428 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:04.391416 kubelet[2762]: E0128 01:45:04.390580 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:06.181667 kubelet[2762]: E0128 01:45:06.181579 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:06.271394 kubelet[2762]: I0128 01:45:06.270424 2762 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 01:45:06.271394 kubelet[2762]: I0128 01:45:06.271396 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 01:45:06.271638 containerd[1553]: time="2026-01-28T01:45:06.271172995Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
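The two entries just above show the kubelet handing the node's pod CIDR (192.168.0.0/24) to containerd over CRI, after which containerd waits for a network plugin (Calico, set up later in this log) to drop a CNI config. A minimal Go sketch of that CRI call, using the k8s.io/cri-api types and the containerd socket path that appears in this log; credentials and error handling are stripped to the essentials:

```go
// Sketch: pushing a pod CIDR to the CRI runtime, mirroring the
// "Updating runtime config through cri with podcidr" entry above.
// Socket path and CIDR are taken from this log.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// containerd exposes the CRI service on its main socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR pushed to runtime")
}
```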
Jan 28 01:45:06.409995 kubelet[2762]: E0128 01:45:06.409148 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:07.068829 kubelet[2762]: I0128 01:45:07.068344 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe0c1abf-aa16-480a-9496-008c82545d18-lib-modules\") pod \"kube-proxy-9zbg7\" (UID: \"fe0c1abf-aa16-480a-9496-008c82545d18\") " pod="kube-system/kube-proxy-9zbg7" Jan 28 01:45:07.068829 kubelet[2762]: I0128 01:45:07.068414 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe0c1abf-aa16-480a-9496-008c82545d18-kube-proxy\") pod \"kube-proxy-9zbg7\" (UID: \"fe0c1abf-aa16-480a-9496-008c82545d18\") " pod="kube-system/kube-proxy-9zbg7" Jan 28 01:45:07.068829 kubelet[2762]: I0128 01:45:07.068439 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe0c1abf-aa16-480a-9496-008c82545d18-xtables-lock\") pod \"kube-proxy-9zbg7\" (UID: \"fe0c1abf-aa16-480a-9496-008c82545d18\") " pod="kube-system/kube-proxy-9zbg7" Jan 28 01:45:07.068829 kubelet[2762]: I0128 01:45:07.068460 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r657\" (UniqueName: \"kubernetes.io/projected/fe0c1abf-aa16-480a-9496-008c82545d18-kube-api-access-7r657\") pod \"kube-proxy-9zbg7\" (UID: \"fe0c1abf-aa16-480a-9496-008c82545d18\") " pod="kube-system/kube-proxy-9zbg7" Jan 28 01:45:07.069454 systemd[1]: Created slice kubepods-besteffort-podfe0c1abf_aa16_480a_9496_008c82545d18.slice - libcontainer container kubepods-besteffort-podfe0c1abf_aa16_480a_9496_008c82545d18.slice. Jan 28 01:45:07.372261 kubelet[2762]: I0128 01:45:07.371722 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbhsb\" (UniqueName: \"kubernetes.io/projected/79b17522-e1bb-455b-b47b-96a39c6108bc-kube-api-access-bbhsb\") pod \"tigera-operator-7dcd859c48-l99cz\" (UID: \"79b17522-e1bb-455b-b47b-96a39c6108bc\") " pod="tigera-operator/tigera-operator-7dcd859c48-l99cz" Jan 28 01:45:07.372261 kubelet[2762]: I0128 01:45:07.371871 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79b17522-e1bb-455b-b47b-96a39c6108bc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-l99cz\" (UID: \"79b17522-e1bb-455b-b47b-96a39c6108bc\") " pod="tigera-operator/tigera-operator-7dcd859c48-l99cz" Jan 28 01:45:07.388337 systemd[1]: Created slice kubepods-besteffort-pod79b17522_e1bb_455b_b47b_96a39c6108bc.slice - libcontainer container kubepods-besteffort-pod79b17522_e1bb_455b_b47b_96a39c6108bc.slice. 
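The dns.go:153 "Nameserver limits exceeded" errors that recur throughout this log mean /etc/resolv.conf lists more nameservers than the resolver limit of three, so the kubelet trims the list to the three shown (1.1.1.1 1.0.0.1 8.8.8.8) and keeps warning on every pod sync. A standalone sketch of the same check; the limit of 3 matches common libc resolvers, and everything beyond the path and limit is illustrative:

```go
// Hypothetical checker reproducing the kubelet's nameserver-limit warning:
// resolv.conf consumers such as glibc honor at most 3 nameserver lines,
// so anything past the third is silently ignored at resolution time.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // limit enforced by common libc resolvers

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}
```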
Jan 28 01:45:07.395477 kubelet[2762]: E0128 01:45:07.395254 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:07.396970 containerd[1553]: time="2026-01-28T01:45:07.396835459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zbg7,Uid:fe0c1abf-aa16-480a-9496-008c82545d18,Namespace:kube-system,Attempt:0,}" Jan 28 01:45:07.411843 kubelet[2762]: E0128 01:45:07.411677 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:07.455716 containerd[1553]: time="2026-01-28T01:45:07.455574936Z" level=info msg="connecting to shim 381d2ebf30a1fbec3fa1edae4d6bc11c8868c8b85164cb1bef619486a986ed69" address="unix:///run/containerd/s/994b9349d30e94f6a37e1d09c416ff2a512c2efbbf6407392baca657b3841410" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:45:07.553829 systemd[1]: Started cri-containerd-381d2ebf30a1fbec3fa1edae4d6bc11c8868c8b85164cb1bef619486a986ed69.scope - libcontainer container 381d2ebf30a1fbec3fa1edae4d6bc11c8868c8b85164cb1bef619486a986ed69. Jan 28 01:45:07.625249 containerd[1553]: time="2026-01-28T01:45:07.625043007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zbg7,Uid:fe0c1abf-aa16-480a-9496-008c82545d18,Namespace:kube-system,Attempt:0,} returns sandbox id \"381d2ebf30a1fbec3fa1edae4d6bc11c8868c8b85164cb1bef619486a986ed69\"" Jan 28 01:45:07.627306 kubelet[2762]: E0128 01:45:07.626992 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:07.631988 containerd[1553]: time="2026-01-28T01:45:07.630398684Z" level=info msg="CreateContainer within sandbox \"381d2ebf30a1fbec3fa1edae4d6bc11c8868c8b85164cb1bef619486a986ed69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 01:45:07.666537 containerd[1553]: time="2026-01-28T01:45:07.664868066Z" level=info msg="Container 20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:45:07.682368 containerd[1553]: time="2026-01-28T01:45:07.682064945Z" level=info msg="CreateContainer within sandbox \"381d2ebf30a1fbec3fa1edae4d6bc11c8868c8b85164cb1bef619486a986ed69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501\"" Jan 28 01:45:07.689036 containerd[1553]: time="2026-01-28T01:45:07.688348061Z" level=info msg="StartContainer for \"20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501\"" Jan 28 01:45:07.690649 containerd[1553]: time="2026-01-28T01:45:07.690431439Z" level=info msg="connecting to shim 20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501" address="unix:///run/containerd/s/994b9349d30e94f6a37e1d09c416ff2a512c2efbbf6407392baca657b3841410" protocol=ttrpc version=3 Jan 28 01:45:07.698381 containerd[1553]: time="2026-01-28T01:45:07.698090137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l99cz,Uid:79b17522-e1bb-455b-b47b-96a39c6108bc,Namespace:tigera-operator,Attempt:0,}" Jan 28 01:45:07.730573 systemd[1]: Started cri-containerd-20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501.scope - libcontainer container 
20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501. Jan 28 01:45:07.753114 containerd[1553]: time="2026-01-28T01:45:07.753054781Z" level=info msg="connecting to shim 21f14742c43952c7f320012848998cde2b60196b0fd16bff7e0cfbcd0eef8cb8" address="unix:///run/containerd/s/be2f23d5c1e204bd262f4f445a2be52a1e8d6c4103c4befd987fad87c3b80979" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:45:07.821285 systemd[1]: Started cri-containerd-21f14742c43952c7f320012848998cde2b60196b0fd16bff7e0cfbcd0eef8cb8.scope - libcontainer container 21f14742c43952c7f320012848998cde2b60196b0fd16bff7e0cfbcd0eef8cb8. Jan 28 01:45:07.868001 containerd[1553]: time="2026-01-28T01:45:07.867606327Z" level=info msg="StartContainer for \"20fb7576e1308ce950452b82887a9825994fff2dcd88bd6136491c72aa2fe501\" returns successfully" Jan 28 01:45:07.908620 containerd[1553]: time="2026-01-28T01:45:07.908413523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l99cz,Uid:79b17522-e1bb-455b-b47b-96a39c6108bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"21f14742c43952c7f320012848998cde2b60196b0fd16bff7e0cfbcd0eef8cb8\"" Jan 28 01:45:07.911609 containerd[1553]: time="2026-01-28T01:45:07.911531001Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 01:45:08.419994 kubelet[2762]: E0128 01:45:08.419502 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:09.292110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681083394.mount: Deactivated successfully. Jan 28 01:45:10.644923 kubelet[2762]: E0128 01:45:10.643505 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:10.689946 kubelet[2762]: I0128 01:45:10.689782 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9zbg7" podStartSLOduration=4.689761943 podStartE2EDuration="4.689761943s" podCreationTimestamp="2026-01-28 01:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:45:08.440211886 +0000 UTC m=+6.274719449" watchObservedRunningTime="2026-01-28 01:45:10.689761943 +0000 UTC m=+8.524269505" Jan 28 01:45:10.827000 kubelet[2762]: E0128 01:45:10.826257 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:11.441353 kubelet[2762]: E0128 01:45:11.441281 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:11.442323 kubelet[2762]: E0128 01:45:11.442207 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:12.445382 kubelet[2762]: E0128 01:45:12.445203 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:13.854587 containerd[1553]: time="2026-01-28T01:45:13.854424308Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:13.856329 containerd[1553]: time="2026-01-28T01:45:13.856234372Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 28 01:45:13.860940 containerd[1553]: time="2026-01-28T01:45:13.860816385Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:13.866776 containerd[1553]: time="2026-01-28T01:45:13.866652975Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:13.867960 containerd[1553]: time="2026-01-28T01:45:13.867735224Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.956129796s" Jan 28 01:45:13.867960 containerd[1553]: time="2026-01-28T01:45:13.867809010Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 01:45:13.871632 containerd[1553]: time="2026-01-28T01:45:13.871511937Z" level=info msg="CreateContainer within sandbox \"21f14742c43952c7f320012848998cde2b60196b0fd16bff7e0cfbcd0eef8cb8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 01:45:13.886991 containerd[1553]: time="2026-01-28T01:45:13.886423879Z" level=info msg="Container 803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:45:13.899938 containerd[1553]: time="2026-01-28T01:45:13.899807632Z" level=info msg="CreateContainer within sandbox \"21f14742c43952c7f320012848998cde2b60196b0fd16bff7e0cfbcd0eef8cb8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea\"" Jan 28 01:45:13.903047 containerd[1553]: time="2026-01-28T01:45:13.900972884Z" level=info msg="StartContainer for \"803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea\"" Jan 28 01:45:13.903047 containerd[1553]: time="2026-01-28T01:45:13.902107099Z" level=info msg="connecting to shim 803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea" address="unix:///run/containerd/s/be2f23d5c1e204bd262f4f445a2be52a1e8d6c4103c4befd987fad87c3b80979" protocol=ttrpc version=3 Jan 28 01:45:13.974232 systemd[1]: Started cri-containerd-803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea.scope - libcontainer container 803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea. 
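The PullImage/Pulled entries above record containerd fetching quay.io/tigera/operator:v1.38.7 (about 25 MB in roughly 6 s) into the k8s.io namespace, where CRI-managed images live. A minimal sketch of the same pull with containerd's Go client, assuming local access to the socket path seen in this log:

```go
// Sketch of the pull behind the "PullImage quay.io/tigera/operator:v1.38.7"
// entries above, using containerd's Go client against the same socket.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are stored in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```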
Jan 28 01:45:14.045068 containerd[1553]: time="2026-01-28T01:45:14.045001794Z" level=info msg="StartContainer for \"803f70a34628e2c2a3c94231205708c0b40f4a079156e2614a9ec880a5edfbea\" returns successfully" Jan 28 01:45:14.471810 kubelet[2762]: I0128 01:45:14.471633 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-l99cz" podStartSLOduration=1.512981522 podStartE2EDuration="7.47161207s" podCreationTimestamp="2026-01-28 01:45:07 +0000 UTC" firstStartedPulling="2026-01-28 01:45:07.910642967 +0000 UTC m=+5.745150509" lastFinishedPulling="2026-01-28 01:45:13.869273514 +0000 UTC m=+11.703781057" observedRunningTime="2026-01-28 01:45:14.471473845 +0000 UTC m=+12.305981407" watchObservedRunningTime="2026-01-28 01:45:14.47161207 +0000 UTC m=+12.306119622" Jan 28 01:45:21.678374 sudo[1779]: pam_unix(sudo:session): session closed for user root Jan 28 01:45:21.689299 sshd[1778]: Connection closed by 10.0.0.1 port 44494 Jan 28 01:45:21.695273 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 28 01:45:21.715882 systemd-logind[1540]: Session 9 logged out. Waiting for processes to exit. Jan 28 01:45:21.730062 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:44494.service: Deactivated successfully. Jan 28 01:45:21.747514 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 01:45:21.756800 systemd[1]: session-9.scope: Consumed 6.589s CPU time, 219.7M memory peak. Jan 28 01:45:21.796596 systemd-logind[1540]: Removed session 9. Jan 28 01:45:29.250838 kubelet[2762]: I0128 01:45:29.250790 2762 status_manager.go:890] "Failed to get status for pod" podUID="289d4d02-fd83-4f53-a733-6b8ed5f27800" pod="calico-system/calico-typha-bc78bc977-7qjzc" err="pods \"calico-typha-bc78bc977-7qjzc\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Jan 28 01:45:29.264754 systemd[1]: Created slice kubepods-besteffort-pod289d4d02_fd83_4f53_a733_6b8ed5f27800.slice - libcontainer container kubepods-besteffort-pod289d4d02_fd83_4f53_a733_6b8ed5f27800.slice. 
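The status_manager "forbidden" entry above is the node authorizer at work: a kubelet may only read pods already bound to its own node, and calico-typha-bc78bc977-7qjzc had only just been created, so no node-to-pod relationship existed yet; the error clears once the scheduler binds the pod. A client-go sketch of the same GET; the kubeconfig path is an assumption:

```go
// Minimal client-go sketch of the GET the status manager issued above.
// Run with the kubelet's credentials, it fails with the same "forbidden"
// error until the pod is bound to this node.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Common kubelet kubeconfig location (assumption; varies by installer).
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("calico-system").Get(
		context.Background(), "calico-typha-bc78bc977-7qjzc", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get failed:", err) // "forbidden" while the pod is unbound
		return
	}
	fmt.Println("pod phase:", pod.Status.Phase)
}
```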
Jan 28 01:45:29.306738 kubelet[2762]: I0128 01:45:29.306683 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/289d4d02-fd83-4f53-a733-6b8ed5f27800-typha-certs\") pod \"calico-typha-bc78bc977-7qjzc\" (UID: \"289d4d02-fd83-4f53-a733-6b8ed5f27800\") " pod="calico-system/calico-typha-bc78bc977-7qjzc" Jan 28 01:45:29.307342 kubelet[2762]: I0128 01:45:29.307246 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/289d4d02-fd83-4f53-a733-6b8ed5f27800-tigera-ca-bundle\") pod \"calico-typha-bc78bc977-7qjzc\" (UID: \"289d4d02-fd83-4f53-a733-6b8ed5f27800\") " pod="calico-system/calico-typha-bc78bc977-7qjzc" Jan 28 01:45:29.307342 kubelet[2762]: I0128 01:45:29.307291 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsw2q\" (UniqueName: \"kubernetes.io/projected/289d4d02-fd83-4f53-a733-6b8ed5f27800-kube-api-access-fsw2q\") pod \"calico-typha-bc78bc977-7qjzc\" (UID: \"289d4d02-fd83-4f53-a733-6b8ed5f27800\") " pod="calico-system/calico-typha-bc78bc977-7qjzc" Jan 28 01:45:29.466734 systemd[1]: Created slice kubepods-besteffort-pod3b1f1aba_09f8_422a_8899_a7ce366970de.slice - libcontainer container kubepods-besteffort-pod3b1f1aba_09f8_422a_8899_a7ce366970de.slice. Jan 28 01:45:29.510086 kubelet[2762]: I0128 01:45:29.509594 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-flexvol-driver-host\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.510086 kubelet[2762]: I0128 01:45:29.509712 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-policysync\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.510086 kubelet[2762]: I0128 01:45:29.509744 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-var-lib-calico\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.511813 kubelet[2762]: I0128 01:45:29.511721 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gm8s\" (UniqueName: \"kubernetes.io/projected/3b1f1aba-09f8-422a-8899-a7ce366970de-kube-api-access-7gm8s\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.511873 kubelet[2762]: I0128 01:45:29.511816 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-var-run-calico\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.511873 kubelet[2762]: I0128 01:45:29.511846 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b1f1aba-09f8-422a-8899-a7ce366970de-tigera-ca-bundle\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.512089 kubelet[2762]: I0128 01:45:29.511872 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-cni-net-dir\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.512089 kubelet[2762]: I0128 01:45:29.512009 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-cni-bin-dir\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.512089 kubelet[2762]: I0128 01:45:29.512033 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-cni-log-dir\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.512089 kubelet[2762]: I0128 01:45:29.512055 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-xtables-lock\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.512089 kubelet[2762]: I0128 01:45:29.512078 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b1f1aba-09f8-422a-8899-a7ce366970de-lib-modules\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.512265 kubelet[2762]: I0128 01:45:29.512102 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3b1f1aba-09f8-422a-8899-a7ce366970de-node-certs\") pod \"calico-node-6jz86\" (UID: \"3b1f1aba-09f8-422a-8899-a7ce366970de\") " pod="calico-system/calico-node-6jz86" Jan 28 01:45:29.574479 kubelet[2762]: E0128 01:45:29.574386 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:29.576773 containerd[1553]: time="2026-01-28T01:45:29.576482427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bc78bc977-7qjzc,Uid:289d4d02-fd83-4f53-a733-6b8ed5f27800,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:29.645001 kubelet[2762]: E0128 01:45:29.641727 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.645001 kubelet[2762]: W0128 01:45:29.641760 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.645001 kubelet[2762]: E0128 01:45:29.641806 2762 plugins.go:695] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.668559 kubelet[2762]: E0128 01:45:29.668248 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:45:29.682011 containerd[1553]: time="2026-01-28T01:45:29.681847937Z" level=info msg="connecting to shim e78122a0d5414e475596652ce87a01af1a62e07fd2d05ecfdb1e86aadc3b7570" address="unix:///run/containerd/s/83b6e052fc5a4be72eb6bfdaa8195d911f152ed44e97cd09e1f0d0ff9d4ed901" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:45:29.686349 kubelet[2762]: E0128 01:45:29.686138 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.686349 kubelet[2762]: W0128 01:45:29.686212 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.686349 kubelet[2762]: E0128 01:45:29.686235 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.714104 kubelet[2762]: E0128 01:45:29.714067 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.714104 kubelet[2762]: W0128 01:45:29.714096 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.714104 kubelet[2762]: E0128 01:45:29.714117 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.717479 kubelet[2762]: E0128 01:45:29.715542 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.717479 kubelet[2762]: W0128 01:45:29.715551 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.717479 kubelet[2762]: E0128 01:45:29.715561 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.717479 kubelet[2762]: E0128 01:45:29.716504 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.717479 kubelet[2762]: W0128 01:45:29.716513 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.717479 kubelet[2762]: E0128 01:45:29.716523 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.718882 kubelet[2762]: E0128 01:45:29.718679 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.718882 kubelet[2762]: W0128 01:45:29.718741 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.718882 kubelet[2762]: E0128 01:45:29.718758 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.722346 kubelet[2762]: E0128 01:45:29.721842 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.723021 kubelet[2762]: W0128 01:45:29.722351 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.723021 kubelet[2762]: E0128 01:45:29.722371 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.723021 kubelet[2762]: I0128 01:45:29.722392 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/760a12b1-4a99-4684-a026-7c55d7164578-kubelet-dir\") pod \"csi-node-driver-5r68l\" (UID: \"760a12b1-4a99-4684-a026-7c55d7164578\") " pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:29.723710 kubelet[2762]: E0128 01:45:29.723468 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.724596 kubelet[2762]: W0128 01:45:29.724202 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.724596 kubelet[2762]: E0128 01:45:29.724470 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.726035 kubelet[2762]: E0128 01:45:29.725991 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.726035 kubelet[2762]: W0128 01:45:29.726007 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.726035 kubelet[2762]: E0128 01:45:29.726020 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.727757 kubelet[2762]: E0128 01:45:29.727678 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.727757 kubelet[2762]: W0128 01:45:29.727690 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.727757 kubelet[2762]: E0128 01:45:29.727700 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.728736 kubelet[2762]: E0128 01:45:29.728545 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.728736 kubelet[2762]: W0128 01:45:29.728557 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.728736 kubelet[2762]: E0128 01:45:29.728699 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.729828 kubelet[2762]: E0128 01:45:29.729572 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.729828 kubelet[2762]: W0128 01:45:29.729667 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.729828 kubelet[2762]: E0128 01:45:29.729788 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.730437 kubelet[2762]: E0128 01:45:29.730187 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.730437 kubelet[2762]: W0128 01:45:29.730202 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.730437 kubelet[2762]: E0128 01:45:29.730215 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.731367 kubelet[2762]: E0128 01:45:29.731303 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.731367 kubelet[2762]: W0128 01:45:29.731314 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.731367 kubelet[2762]: E0128 01:45:29.731323 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.732787 kubelet[2762]: E0128 01:45:29.732242 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.732787 kubelet[2762]: W0128 01:45:29.732257 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.732787 kubelet[2762]: E0128 01:45:29.732271 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.733575 kubelet[2762]: E0128 01:45:29.733111 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.733575 kubelet[2762]: W0128 01:45:29.733123 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.733575 kubelet[2762]: E0128 01:45:29.733194 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.735024 kubelet[2762]: E0128 01:45:29.734761 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.735024 kubelet[2762]: W0128 01:45:29.734777 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.735024 kubelet[2762]: E0128 01:45:29.734792 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.736087 kubelet[2762]: E0128 01:45:29.735376 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.736087 kubelet[2762]: W0128 01:45:29.735387 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.736087 kubelet[2762]: E0128 01:45:29.735399 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.736087 kubelet[2762]: E0128 01:45:29.735969 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.736087 kubelet[2762]: W0128 01:45:29.735982 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.736087 kubelet[2762]: E0128 01:45:29.735993 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.736600 kubelet[2762]: E0128 01:45:29.736460 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.736600 kubelet[2762]: W0128 01:45:29.736474 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.736600 kubelet[2762]: E0128 01:45:29.736485 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.738014 kubelet[2762]: E0128 01:45:29.737374 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.738014 kubelet[2762]: W0128 01:45:29.737389 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.738014 kubelet[2762]: E0128 01:45:29.737401 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.739534 kubelet[2762]: E0128 01:45:29.738107 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.739534 kubelet[2762]: W0128 01:45:29.738118 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.739534 kubelet[2762]: E0128 01:45:29.738129 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.739534 kubelet[2762]: E0128 01:45:29.738363 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.739534 kubelet[2762]: W0128 01:45:29.738373 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.739534 kubelet[2762]: E0128 01:45:29.738384 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.739534 kubelet[2762]: E0128 01:45:29.738712 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.739534 kubelet[2762]: W0128 01:45:29.738724 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.739534 kubelet[2762]: E0128 01:45:29.738737 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.740162 kubelet[2762]: E0128 01:45:29.739558 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.740162 kubelet[2762]: W0128 01:45:29.739569 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.740162 kubelet[2762]: E0128 01:45:29.739581 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.783289 kubelet[2762]: E0128 01:45:29.780160 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:29.785359 containerd[1553]: time="2026-01-28T01:45:29.785318255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6jz86,Uid:3b1f1aba-09f8-422a-8899-a7ce366970de,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:29.807505 systemd[1]: Started cri-containerd-e78122a0d5414e475596652ce87a01af1a62e07fd2d05ecfdb1e86aadc3b7570.scope - libcontainer container e78122a0d5414e475596652ce87a01af1a62e07fd2d05ecfdb1e86aadc3b7570. Jan 28 01:45:29.826793 kubelet[2762]: E0128 01:45:29.826754 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.826793 kubelet[2762]: W0128 01:45:29.826783 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.826793 kubelet[2762]: E0128 01:45:29.826805 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.827135 kubelet[2762]: E0128 01:45:29.827125 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.827135 kubelet[2762]: W0128 01:45:29.827135 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.827211 kubelet[2762]: E0128 01:45:29.827146 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.827539 kubelet[2762]: I0128 01:45:29.827367 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/760a12b1-4a99-4684-a026-7c55d7164578-socket-dir\") pod \"csi-node-driver-5r68l\" (UID: \"760a12b1-4a99-4684-a026-7c55d7164578\") " pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:29.830868 kubelet[2762]: E0128 01:45:29.830691 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.830868 kubelet[2762]: W0128 01:45:29.830784 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.830868 kubelet[2762]: E0128 01:45:29.830817 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.830868 kubelet[2762]: I0128 01:45:29.830860 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7zh\" (UniqueName: \"kubernetes.io/projected/760a12b1-4a99-4684-a026-7c55d7164578-kube-api-access-nn7zh\") pod \"csi-node-driver-5r68l\" (UID: \"760a12b1-4a99-4684-a026-7c55d7164578\") " pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:29.833313 kubelet[2762]: E0128 01:45:29.832772 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.833313 kubelet[2762]: W0128 01:45:29.833043 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.835065 kubelet[2762]: E0128 01:45:29.834404 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.835065 kubelet[2762]: I0128 01:45:29.834488 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/760a12b1-4a99-4684-a026-7c55d7164578-registration-dir\") pod \"csi-node-driver-5r68l\" (UID: \"760a12b1-4a99-4684-a026-7c55d7164578\") " pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:29.835065 kubelet[2762]: E0128 01:45:29.834720 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.835065 kubelet[2762]: W0128 01:45:29.834733 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.835862 kubelet[2762]: E0128 01:45:29.835437 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.835862 kubelet[2762]: E0128 01:45:29.835796 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.835862 kubelet[2762]: W0128 01:45:29.835810 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.837219 kubelet[2762]: E0128 01:45:29.836997 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.838867 kubelet[2762]: E0128 01:45:29.838746 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.838867 kubelet[2762]: W0128 01:45:29.838817 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.839760 kubelet[2762]: E0128 01:45:29.839411 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.840288 kubelet[2762]: E0128 01:45:29.840173 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.840288 kubelet[2762]: W0128 01:45:29.840190 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.840288 kubelet[2762]: E0128 01:45:29.840274 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.840394 kubelet[2762]: I0128 01:45:29.840302 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/760a12b1-4a99-4684-a026-7c55d7164578-varrun\") pod \"csi-node-driver-5r68l\" (UID: \"760a12b1-4a99-4684-a026-7c55d7164578\") " pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:29.840800 kubelet[2762]: E0128 01:45:29.840547 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.840800 kubelet[2762]: W0128 01:45:29.840677 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.841815 kubelet[2762]: E0128 01:45:29.840856 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:29.843435 kubelet[2762]: E0128 01:45:29.843073 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.844017 kubelet[2762]: W0128 01:45:29.843966 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.844191 kubelet[2762]: E0128 01:45:29.844161 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.846236 kubelet[2762]: E0128 01:45:29.845169 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.846236 kubelet[2762]: W0128 01:45:29.845181 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.846236 kubelet[2762]: E0128 01:45:29.845202 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.846236 kubelet[2762]: E0128 01:45:29.846326 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.846492 kubelet[2762]: W0128 01:45:29.846338 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.847206 kubelet[2762]: E0128 01:45:29.846823 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.847478 kubelet[2762]: E0128 01:45:29.847356 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.848531 kubelet[2762]: W0128 01:45:29.847720 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.848531 kubelet[2762]: E0128 01:45:29.847982 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:29.848966 kubelet[2762]: E0128 01:45:29.848721 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:29.848966 kubelet[2762]: W0128 01:45:29.848789 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:29.849286 kubelet[2762]: E0128 01:45:29.849107 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 28 01:45:29.892499 containerd[1553]: time="2026-01-28T01:45:29.892264797Z" level=info msg="connecting to shim 92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131" address="unix:///run/containerd/s/cd8cd0e0328bca4b8bd63a1efa897ee19581493f03bad8aa8e709d5a2caa76ff" namespace=k8s.io protocol=ttrpc version=3
Jan 28 01:45:29.937490 systemd[1]: Started cri-containerd-92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131.scope - libcontainer container 92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131.
Jan 28 01:45:29.993270 containerd[1553]: time="2026-01-28T01:45:29.993237892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bc78bc977-7qjzc,Uid:289d4d02-fd83-4f53-a733-6b8ed5f27800,Namespace:calico-system,Attempt:0,} returns sandbox id \"e78122a0d5414e475596652ce87a01af1a62e07fd2d05ecfdb1e86aadc3b7570\""
Jan 28 01:45:29.994271 kubelet[2762]: E0128 01:45:29.994251 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:45:29.997179 containerd[1553]: time="2026-01-28T01:45:29.997156749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 28 01:45:30.039873 containerd[1553]: time="2026-01-28T01:45:30.039714892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6jz86,Uid:3b1f1aba-09f8-422a-8899-a7ce366970de,Namespace:calico-system,Attempt:0,} returns sandbox id \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\""
Jan 28 01:45:30.041526 kubelet[2762]: E0128 01:45:30.041376 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:45:30.696860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907414923.mount: Deactivated successfully.
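The dns.go:153 records around the sandbox creation show kubelet's resolv.conf clamp: when the host lists more nameservers than the limit (three, matching the applied line `1.1.1.1 1.0.0.1 8.8.8.8` here), kubelet keeps the first three and logs the warning. A sketch of that clamping, assuming a hypothetical fourth host nameserver `9.9.9.9` only to trigger the warning:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the cap kubelet applies when building a pod's
// resolv.conf; the applied line in the log holds exactly three servers.
const maxNameservers = 3

func main() {
	// Hypothetical host resolv.conf with one server too many.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	if len(nameservers) > maxNameservers {
		applied := nameservers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```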
Jan 28 01:45:31.348870 kubelet[2762]: E0128 01:45:31.348294 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578"
Jan 28 01:45:31.920464 containerd[1553]: time="2026-01-28T01:45:31.920333464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:45:31.923681 containerd[1553]: time="2026-01-28T01:45:31.923553965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 28 01:45:31.925712 containerd[1553]: time="2026-01-28T01:45:31.925557602Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:45:31.930434 containerd[1553]: time="2026-01-28T01:45:31.930343136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 01:45:31.931408 containerd[1553]: time="2026-01-28T01:45:31.931148055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.933430663s"
Jan 28 01:45:31.931408 containerd[1553]: time="2026-01-28T01:45:31.931182879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 28 01:45:31.939163 containerd[1553]: time="2026-01-28T01:45:31.938972969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 01:45:31.958366 containerd[1553]: time="2026-01-28T01:45:31.958228392Z" level=info msg="CreateContainer within sandbox \"e78122a0d5414e475596652ce87a01af1a62e07fd2d05ecfdb1e86aadc3b7570\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 01:45:31.984278 containerd[1553]: time="2026-01-28T01:45:31.984144332Z" level=info msg="Container 76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad: CDI devices from CRI Config.CDIDevices: []"
Jan 28 01:45:31.987724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248273007.mount: Deactivated successfully.
Jan 28 01:45:32.012177 containerd[1553]: time="2026-01-28T01:45:32.011493239Z" level=info msg="CreateContainer within sandbox \"e78122a0d5414e475596652ce87a01af1a62e07fd2d05ecfdb1e86aadc3b7570\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad\""
Jan 28 01:45:32.014738 containerd[1553]: time="2026-01-28T01:45:32.014597732Z" level=info msg="StartContainer for \"76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad\""
Jan 28 01:45:32.015968 containerd[1553]: time="2026-01-28T01:45:32.015804879Z" level=info msg="connecting to shim 76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad" address="unix:///run/containerd/s/83b6e052fc5a4be72eb6bfdaa8195d911f152ed44e97cd09e1f0d0ff9d4ed901" protocol=ttrpc version=3
Jan 28 01:45:32.063195 systemd[1]: Started cri-containerd-76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad.scope - libcontainer container 76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad.
Jan 28 01:45:32.249451 containerd[1553]: time="2026-01-28T01:45:32.249201274Z" level=info msg="StartContainer for \"76e2f49685a09e4e6378a3d9e07181738342795d71395784091bdf390233d4ad\" returns successfully"
Jan 28 01:45:32.583070 kubelet[2762]: E0128 01:45:32.581232 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:45:32.673041 kubelet[2762]: E0128 01:45:32.672865 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:45:32.673041 kubelet[2762]: W0128 01:45:32.673024 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:45:32.674558 kubelet[2762]: E0128 01:45:32.673053 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:45:33.348546 kubelet[2762]: E0128 01:45:33.347331 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578"
Jan 28 01:45:33.591874 kubelet[2762]: E0128 01:45:33.589494 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 28 01:45:33.643557 kubelet[2762]: E0128 01:45:33.643219 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 01:45:33.643737 kubelet[2762]: W0128 01:45:33.643590 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 01:45:33.648840 kubelet[2762]: E0128 01:45:33.648365 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 01:45:33.660710 kubelet[2762]: I0128 01:45:33.658464 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bc78bc977-7qjzc" podStartSLOduration=2.716509824 podStartE2EDuration="4.658438063s" podCreationTimestamp="2026-01-28 01:45:29 +0000 UTC" firstStartedPulling="2026-01-28 01:45:29.996603871 +0000 UTC m=+27.831111413" lastFinishedPulling="2026-01-28 01:45:31.93853211 +0000 UTC m=+29.773039652" observedRunningTime="2026-01-28 01:45:32.662435992 +0000 UTC m=+30.496943535" watchObservedRunningTime="2026-01-28 01:45:33.658438063 +0000 UTC m=+31.492945605"
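The pod_startup_latency_tracker record is self-consistent: podStartSLOduration equals the end-to-end duration minus the image-pull window, as the record's own monotonic (m=+…) offsets show. A quick check with the logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker record above.
	e2e := 4658438063 * time.Nanosecond                  // podStartE2EDuration="4.658438063s"
	firstStartedPulling := 27831111413 * time.Nanosecond // m=+27.831111413
	lastFinishedPulling := 29773039652 * time.Nanosecond // m=+29.773039652

	// The SLO duration excludes time spent pulling images; the record's own
	// numbers confirm it: 4.658438063s - 1.941928239s = 2.716509824s.
	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Println(slo) // prints "2.716509824s", matching podStartSLOduration
}
```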
Error: unexpected end of JSON input" Jan 28 01:45:33.787992 kubelet[2762]: E0128 01:45:33.787258 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:33.787992 kubelet[2762]: W0128 01:45:33.787273 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:33.787992 kubelet[2762]: E0128 01:45:33.787354 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:33.788158 kubelet[2762]: E0128 01:45:33.788099 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:33.788158 kubelet[2762]: W0128 01:45:33.788113 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:33.788336 kubelet[2762]: E0128 01:45:33.788286 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:33.794108 kubelet[2762]: E0128 01:45:33.794075 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:33.794499 kubelet[2762]: W0128 01:45:33.794257 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:33.794499 kubelet[2762]: E0128 01:45:33.794344 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:33.798853 kubelet[2762]: E0128 01:45:33.797075 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:33.798853 kubelet[2762]: W0128 01:45:33.797166 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:33.798853 kubelet[2762]: E0128 01:45:33.797201 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:45:33.798853 kubelet[2762]: E0128 01:45:33.798394 2762 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:45:33.798853 kubelet[2762]: W0128 01:45:33.798406 2762 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:45:33.798853 kubelet[2762]: E0128 01:45:33.798569 2762 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:45:34.228205 containerd[1553]: time="2026-01-28T01:45:34.227972388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:34.230491 containerd[1553]: time="2026-01-28T01:45:34.230226167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 28 01:45:34.233859 containerd[1553]: time="2026-01-28T01:45:34.232805986Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:34.245604 containerd[1553]: time="2026-01-28T01:45:34.245501462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:34.252774 containerd[1553]: time="2026-01-28T01:45:34.251227622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.311969831s" Jan 28 01:45:34.252774 containerd[1553]: time="2026-01-28T01:45:34.251325112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:45:34.258606 containerd[1553]: time="2026-01-28T01:45:34.258106020Z" level=info msg="CreateContainer within sandbox \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:45:34.288028 containerd[1553]: time="2026-01-28T01:45:34.287147469Z" level=info msg="Container 486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:45:34.316072 containerd[1553]: time="2026-01-28T01:45:34.315550451Z" level=info msg="CreateContainer within sandbox \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad\"" Jan 28 01:45:34.320119 containerd[1553]: time="2026-01-28T01:45:34.320064423Z" level=info msg="StartContainer for \"486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad\"" Jan 28 01:45:34.323857 containerd[1553]: time="2026-01-28T01:45:34.323821108Z" level=info msg="connecting to shim 486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad" address="unix:///run/containerd/s/cd8cd0e0328bca4b8bd63a1efa897ee19581493f03bad8aa8e709d5a2caa76ff" protocol=ttrpc version=3 Jan 28 01:45:34.384268 systemd[1]: Started cri-containerd-486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad.scope - libcontainer container 486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad. 
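The error burst above is kubelet's FlexVolume prober: it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and invokes the driver binary with the single argument init. The binary is not installed yet, so the call produces no output, and unmarshalling an empty string as JSON fails with "unexpected end of JSON input". The noise stops once Calico's pod2daemon flexvol-driver init container (whose image pull appears just above) installs the driver. As an illustration of the call convention kubelet expects, here is a minimal sketch of a conforming driver in Go; this is not Calico's actual driver, only the documented response shape:

// flexvol_init_sketch.go - a minimal sketch (not Calico's driver) of the
// FlexVolume call convention: kubelet runs "<driver> init" and expects a
// JSON status object on stdout. An empty response is exactly what produces
// the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the documented FlexVolume response shape.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report the driver as usable with no attach/detach support.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Calls the driver does not implement answer "Not supported".
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}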
Jan 28 01:45:34.546482 containerd[1553]: time="2026-01-28T01:45:34.546150537Z" level=info msg="StartContainer for \"486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad\" returns successfully" Jan 28 01:45:34.570698 systemd[1]: cri-containerd-486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad.scope: Deactivated successfully. Jan 28 01:45:34.576978 containerd[1553]: time="2026-01-28T01:45:34.576810375Z" level=info msg="received container exit event container_id:\"486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad\" id:\"486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad\" pid:3496 exited_at:{seconds:1769564734 nanos:576091658}" Jan 28 01:45:34.600756 kubelet[2762]: E0128 01:45:34.600593 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:34.603390 kubelet[2762]: E0128 01:45:34.600860 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:34.657992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-486942919abdfe57e48cdfe19a6f8035c2b7301726f76032f8d78fbb4542adad-rootfs.mount: Deactivated successfully. Jan 28 01:45:35.347367 kubelet[2762]: E0128 01:45:35.347203 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:45:35.610340 kubelet[2762]: E0128 01:45:35.610043 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:35.611787 kubelet[2762]: E0128 01:45:35.611047 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:35.613307 containerd[1553]: time="2026-01-28T01:45:35.613218040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:45:37.348721 kubelet[2762]: E0128 01:45:37.347484 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:45:38.779528 containerd[1553]: time="2026-01-28T01:45:38.779314910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:38.784015 containerd[1553]: time="2026-01-28T01:45:38.783021239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 28 01:45:38.787287 containerd[1553]: time="2026-01-28T01:45:38.786879706Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:38.793069 containerd[1553]: time="2026-01-28T01:45:38.792220544Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:38.794015 containerd[1553]: time="2026-01-28T01:45:38.793829515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.180517188s" Jan 28 01:45:38.794130 containerd[1553]: time="2026-01-28T01:45:38.794024479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:45:38.798857 containerd[1553]: time="2026-01-28T01:45:38.798810555Z" level=info msg="CreateContainer within sandbox \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:45:38.833979 containerd[1553]: time="2026-01-28T01:45:38.832530380Z" level=info msg="Container 4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:45:38.881985 containerd[1553]: time="2026-01-28T01:45:38.881490378Z" level=info msg="CreateContainer within sandbox \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0\"" Jan 28 01:45:38.886088 containerd[1553]: time="2026-01-28T01:45:38.882601815Z" level=info msg="StartContainer for \"4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0\"" Jan 28 01:45:38.886088 containerd[1553]: time="2026-01-28T01:45:38.885411859Z" level=info msg="connecting to shim 4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0" address="unix:///run/containerd/s/cd8cd0e0328bca4b8bd63a1efa897ee19581493f03bad8aa8e709d5a2caa76ff" protocol=ttrpc version=3 Jan 28 01:45:38.927236 systemd[1]: Started cri-containerd-4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0.scope - libcontainer container 4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0. Jan 28 01:45:39.067142 containerd[1553]: time="2026-01-28T01:45:39.066391768Z" level=info msg="StartContainer for \"4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0\" returns successfully" Jan 28 01:45:39.348256 kubelet[2762]: E0128 01:45:39.347459 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:45:39.632575 kubelet[2762]: E0128 01:45:39.631217 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:39.888287 systemd[1]: cri-containerd-4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0.scope: Deactivated successfully. Jan 28 01:45:39.888624 systemd[1]: cri-containerd-4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0.scope: Consumed 912ms CPU time, 177.1M memory peak, 3.2M read from disk, 171.3M written to disk. 
Jan 28 01:45:39.892472 containerd[1553]: time="2026-01-28T01:45:39.892371218Z" level=info msg="received container exit event container_id:\"4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0\" id:\"4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0\" pid:3557 exited_at:{seconds:1769564739 nanos:891875508}" Jan 28 01:45:39.952313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bf3d6c33dbdfc05ce9306d9bc08a566c83ab5a4ff240d431a039bfe4e01a1e0-rootfs.mount: Deactivated successfully. Jan 28 01:45:39.955462 kubelet[2762]: I0128 01:45:39.955334 2762 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 01:45:40.026354 systemd[1]: Created slice kubepods-besteffort-poda2048076_ab34_4562_b42d_515b64a0bfb4.slice - libcontainer container kubepods-besteffort-poda2048076_ab34_4562_b42d_515b64a0bfb4.slice. Jan 28 01:45:40.053228 kubelet[2762]: I0128 01:45:40.049750 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2zt7\" (UniqueName: \"kubernetes.io/projected/a2048076-ab34-4562-b42d-515b64a0bfb4-kube-api-access-p2zt7\") pod \"calico-apiserver-6954f9c796-gqzwx\" (UID: \"a2048076-ab34-4562-b42d-515b64a0bfb4\") " pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" Jan 28 01:45:40.053228 kubelet[2762]: I0128 01:45:40.049810 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179-tigera-ca-bundle\") pod \"calico-kube-controllers-64456467b5-b47z9\" (UID: \"5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179\") " pod="calico-system/calico-kube-controllers-64456467b5-b47z9" Jan 28 01:45:40.054024 kubelet[2762]: I0128 01:45:40.053339 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2048076-ab34-4562-b42d-515b64a0bfb4-calico-apiserver-certs\") pod \"calico-apiserver-6954f9c796-gqzwx\" (UID: \"a2048076-ab34-4562-b42d-515b64a0bfb4\") " pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" Jan 28 01:45:40.054024 kubelet[2762]: I0128 01:45:40.053603 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2cd00be8-fccf-4399-b5b1-c60bf8266112-calico-apiserver-certs\") pod \"calico-apiserver-6954f9c796-rjrhf\" (UID: \"2cd00be8-fccf-4399-b5b1-c60bf8266112\") " pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" Jan 28 01:45:40.055216 kubelet[2762]: I0128 01:45:40.054349 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9hd\" (UniqueName: \"kubernetes.io/projected/2cd00be8-fccf-4399-b5b1-c60bf8266112-kube-api-access-tt9hd\") pod \"calico-apiserver-6954f9c796-rjrhf\" (UID: \"2cd00be8-fccf-4399-b5b1-c60bf8266112\") " pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" Jan 28 01:45:40.055216 kubelet[2762]: I0128 01:45:40.054414 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qmp\" (UniqueName: \"kubernetes.io/projected/5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179-kube-api-access-82qmp\") pod \"calico-kube-controllers-64456467b5-b47z9\" (UID: \"5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179\") " pod="calico-system/calico-kube-controllers-64456467b5-b47z9" Jan 28 01:45:40.066258 systemd[1]: Created slice 
kubepods-besteffort-pod2cd00be8_fccf_4399_b5b1_c60bf8266112.slice - libcontainer container kubepods-besteffort-pod2cd00be8_fccf_4399_b5b1_c60bf8266112.slice. Jan 28 01:45:40.086532 systemd[1]: Created slice kubepods-besteffort-pod5fd9c2ef_ecf6_4c2a_ace3_1669c7df4179.slice - libcontainer container kubepods-besteffort-pod5fd9c2ef_ecf6_4c2a_ace3_1669c7df4179.slice. Jan 28 01:45:40.098094 systemd[1]: Created slice kubepods-burstable-pod2b620013_7ee3_4980_87da_661ce5681449.slice - libcontainer container kubepods-burstable-pod2b620013_7ee3_4980_87da_661ce5681449.slice. Jan 28 01:45:40.123631 systemd[1]: Created slice kubepods-burstable-poda5967aa1_f4ae_477d_ae42_e205a147743e.slice - libcontainer container kubepods-burstable-poda5967aa1_f4ae_477d_ae42_e205a147743e.slice. Jan 28 01:45:40.148450 systemd[1]: Created slice kubepods-besteffort-pod79a20841_d4d9_461b_a1a9_1610a5791824.slice - libcontainer container kubepods-besteffort-pod79a20841_d4d9_461b_a1a9_1610a5791824.slice. Jan 28 01:45:40.155703 kubelet[2762]: I0128 01:45:40.155542 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b620013-7ee3-4980-87da-661ce5681449-config-volume\") pod \"coredns-668d6bf9bc-ch2px\" (UID: \"2b620013-7ee3-4980-87da-661ce5681449\") " pod="kube-system/coredns-668d6bf9bc-ch2px" Jan 28 01:45:40.155703 kubelet[2762]: I0128 01:45:40.155616 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-ca-bundle\") pod \"whisker-5f89845cd-2d67p\" (UID: \"79a20841-d4d9-461b-a1a9-1610a5791824\") " pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:40.155703 kubelet[2762]: I0128 01:45:40.155636 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzw8q\" (UniqueName: \"kubernetes.io/projected/79a20841-d4d9-461b-a1a9-1610a5791824-kube-api-access-pzw8q\") pod \"whisker-5f89845cd-2d67p\" (UID: \"79a20841-d4d9-461b-a1a9-1610a5791824\") " pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:40.156821 kubelet[2762]: I0128 01:45:40.155709 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpt2\" (UniqueName: \"kubernetes.io/projected/2b620013-7ee3-4980-87da-661ce5681449-kube-api-access-rmpt2\") pod \"coredns-668d6bf9bc-ch2px\" (UID: \"2b620013-7ee3-4980-87da-661ce5681449\") " pod="kube-system/coredns-668d6bf9bc-ch2px" Jan 28 01:45:40.156821 kubelet[2762]: I0128 01:45:40.155733 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-backend-key-pair\") pod \"whisker-5f89845cd-2d67p\" (UID: \"79a20841-d4d9-461b-a1a9-1610a5791824\") " pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:40.156821 kubelet[2762]: I0128 01:45:40.155765 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6brpt\" (UniqueName: \"kubernetes.io/projected/a5967aa1-f4ae-477d-ae42-e205a147743e-kube-api-access-6brpt\") pod \"coredns-668d6bf9bc-fhrtx\" (UID: \"a5967aa1-f4ae-477d-ae42-e205a147743e\") " pod="kube-system/coredns-668d6bf9bc-fhrtx" Jan 28 01:45:40.156821 kubelet[2762]: I0128 01:45:40.155782 2762 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5967aa1-f4ae-477d-ae42-e205a147743e-config-volume\") pod \"coredns-668d6bf9bc-fhrtx\" (UID: \"a5967aa1-f4ae-477d-ae42-e205a147743e\") " pod="kube-system/coredns-668d6bf9bc-fhrtx" Jan 28 01:45:40.156821 kubelet[2762]: I0128 01:45:40.155795 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c966ee1e-4a54-4737-b8f4-7c2be261a470-goldmane-key-pair\") pod \"goldmane-666569f655-vgk8b\" (UID: \"c966ee1e-4a54-4737-b8f4-7c2be261a470\") " pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:40.157621 kubelet[2762]: I0128 01:45:40.155828 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c966ee1e-4a54-4737-b8f4-7c2be261a470-goldmane-ca-bundle\") pod \"goldmane-666569f655-vgk8b\" (UID: \"c966ee1e-4a54-4737-b8f4-7c2be261a470\") " pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:40.157621 kubelet[2762]: I0128 01:45:40.155843 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c966ee1e-4a54-4737-b8f4-7c2be261a470-config\") pod \"goldmane-666569f655-vgk8b\" (UID: \"c966ee1e-4a54-4737-b8f4-7c2be261a470\") " pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:40.157621 kubelet[2762]: I0128 01:45:40.155857 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sgb5\" (UniqueName: \"kubernetes.io/projected/c966ee1e-4a54-4737-b8f4-7c2be261a470-kube-api-access-8sgb5\") pod \"goldmane-666569f655-vgk8b\" (UID: \"c966ee1e-4a54-4737-b8f4-7c2be261a470\") " pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:40.192801 systemd[1]: Created slice kubepods-besteffort-podc966ee1e_4a54_4737_b8f4_7c2be261a470.slice - libcontainer container kubepods-besteffort-podc966ee1e_4a54_4737_b8f4_7c2be261a470.slice. 
Jan 28 01:45:40.355388 containerd[1553]: time="2026-01-28T01:45:40.355251245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-gqzwx,Uid:a2048076-ab34-4562-b42d-515b64a0bfb4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:45:40.379519 containerd[1553]: time="2026-01-28T01:45:40.379252893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-rjrhf,Uid:2cd00be8-fccf-4399-b5b1-c60bf8266112,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:45:40.400435 containerd[1553]: time="2026-01-28T01:45:40.400254340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64456467b5-b47z9,Uid:5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:40.411606 kubelet[2762]: E0128 01:45:40.410974 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:40.417416 containerd[1553]: time="2026-01-28T01:45:40.415449520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ch2px,Uid:2b620013-7ee3-4980-87da-661ce5681449,Namespace:kube-system,Attempt:0,}" Jan 28 01:45:40.444312 kubelet[2762]: E0128 01:45:40.444273 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:40.454253 containerd[1553]: time="2026-01-28T01:45:40.454213237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhrtx,Uid:a5967aa1-f4ae-477d-ae42-e205a147743e,Namespace:kube-system,Attempt:0,}" Jan 28 01:45:40.507208 containerd[1553]: time="2026-01-28T01:45:40.506454841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f89845cd-2d67p,Uid:79a20841-d4d9-461b-a1a9-1610a5791824,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:40.528441 containerd[1553]: time="2026-01-28T01:45:40.528388996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vgk8b,Uid:c966ee1e-4a54-4737-b8f4-7c2be261a470,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:40.658362 kubelet[2762]: E0128 01:45:40.657354 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:40.666869 containerd[1553]: time="2026-01-28T01:45:40.666832366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:45:40.771980 containerd[1553]: time="2026-01-28T01:45:40.771836338Z" level=error msg="Failed to destroy network for sandbox \"922425edee8192e72c7797f6d5f209468dac784a81b534215a713fded5c303d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.780062 containerd[1553]: time="2026-01-28T01:45:40.780009715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64456467b5-b47z9,Uid:5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"922425edee8192e72c7797f6d5f209468dac784a81b534215a713fded5c303d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 01:45:40.794600 containerd[1553]: time="2026-01-28T01:45:40.794030259Z" level=error msg="Failed to destroy network for sandbox \"a5715b15ef82f7c0e623773b1703937b59b6d16e43bbf81fb157808857e56a3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.797796 containerd[1553]: time="2026-01-28T01:45:40.797627370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhrtx,Uid:a5967aa1-f4ae-477d-ae42-e205a147743e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5715b15ef82f7c0e623773b1703937b59b6d16e43bbf81fb157808857e56a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.800007 containerd[1553]: time="2026-01-28T01:45:40.799974884Z" level=error msg="Failed to destroy network for sandbox \"5fe902efa5d965d85bf6ca77528ea91fab5e2d7150834000544fe6fb53957782\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.807419 containerd[1553]: time="2026-01-28T01:45:40.807374538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-rjrhf,Uid:2cd00be8-fccf-4399-b5b1-c60bf8266112,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe902efa5d965d85bf6ca77528ea91fab5e2d7150834000544fe6fb53957782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.812075 containerd[1553]: time="2026-01-28T01:45:40.812009394Z" level=error msg="Failed to destroy network for sandbox \"9ae23b8cf7e51c7339368f67379d019d296fa1136cccfb2dc89f1878d1aaab3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.812399 kubelet[2762]: E0128 01:45:40.812282 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe902efa5d965d85bf6ca77528ea91fab5e2d7150834000544fe6fb53957782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.812501 kubelet[2762]: E0128 01:45:40.812412 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fe902efa5d965d85bf6ca77528ea91fab5e2d7150834000544fe6fb53957782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" Jan 28 01:45:40.812501 kubelet[2762]: E0128 01:45:40.812443 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5fe902efa5d965d85bf6ca77528ea91fab5e2d7150834000544fe6fb53957782\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" Jan 28 01:45:40.812758 kubelet[2762]: E0128 01:45:40.812615 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fe902efa5d965d85bf6ca77528ea91fab5e2d7150834000544fe6fb53957782\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:45:40.813592 kubelet[2762]: E0128 01:45:40.813446 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922425edee8192e72c7797f6d5f209468dac784a81b534215a713fded5c303d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.813592 kubelet[2762]: E0128 01:45:40.813543 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922425edee8192e72c7797f6d5f209468dac784a81b534215a713fded5c303d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" Jan 28 01:45:40.813592 kubelet[2762]: E0128 01:45:40.813580 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"922425edee8192e72c7797f6d5f209468dac784a81b534215a713fded5c303d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" Jan 28 01:45:40.813777 kubelet[2762]: E0128 01:45:40.813647 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"922425edee8192e72c7797f6d5f209468dac784a81b534215a713fded5c303d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:45:40.814048 kubelet[2762]: E0128 01:45:40.813795 2762 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5715b15ef82f7c0e623773b1703937b59b6d16e43bbf81fb157808857e56a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.814048 kubelet[2762]: E0128 01:45:40.813839 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5715b15ef82f7c0e623773b1703937b59b6d16e43bbf81fb157808857e56a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhrtx" Jan 28 01:45:40.814048 kubelet[2762]: E0128 01:45:40.813866 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5715b15ef82f7c0e623773b1703937b59b6d16e43bbf81fb157808857e56a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhrtx" Jan 28 01:45:40.814177 kubelet[2762]: E0128 01:45:40.814022 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fhrtx_kube-system(a5967aa1-f4ae-477d-ae42-e205a147743e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fhrtx_kube-system(a5967aa1-f4ae-477d-ae42-e205a147743e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5715b15ef82f7c0e623773b1703937b59b6d16e43bbf81fb157808857e56a3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fhrtx" podUID="a5967aa1-f4ae-477d-ae42-e205a147743e" Jan 28 01:45:40.816485 containerd[1553]: time="2026-01-28T01:45:40.816220886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ch2px,Uid:2b620013-7ee3-4980-87da-661ce5681449,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae23b8cf7e51c7339368f67379d019d296fa1136cccfb2dc89f1878d1aaab3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.817167 kubelet[2762]: E0128 01:45:40.817137 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae23b8cf7e51c7339368f67379d019d296fa1136cccfb2dc89f1878d1aaab3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.818359 kubelet[2762]: E0128 01:45:40.818030 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae23b8cf7e51c7339368f67379d019d296fa1136cccfb2dc89f1878d1aaab3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ch2px" Jan 28 01:45:40.819037 kubelet[2762]: E0128 01:45:40.818210 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae23b8cf7e51c7339368f67379d019d296fa1136cccfb2dc89f1878d1aaab3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ch2px" Jan 28 01:45:40.820255 kubelet[2762]: E0128 01:45:40.820155 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ch2px_kube-system(2b620013-7ee3-4980-87da-661ce5681449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ch2px_kube-system(2b620013-7ee3-4980-87da-661ce5681449)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ae23b8cf7e51c7339368f67379d019d296fa1136cccfb2dc89f1878d1aaab3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ch2px" podUID="2b620013-7ee3-4980-87da-661ce5681449" Jan 28 01:45:40.822529 containerd[1553]: time="2026-01-28T01:45:40.822422882Z" level=error msg="Failed to destroy network for sandbox \"7eeae419af7fcbbca829138a2a1242c00c064e7995db446a5416da210257d5bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.849240 containerd[1553]: time="2026-01-28T01:45:40.843515369Z" level=error msg="Failed to destroy network for sandbox \"2e8a6d620f20327d8a1a5d01d23712cfa46a7221309fd5e857d418d909fffb97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.857081 containerd[1553]: time="2026-01-28T01:45:40.856838614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-gqzwx,Uid:a2048076-ab34-4562-b42d-515b64a0bfb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeae419af7fcbbca829138a2a1242c00c064e7995db446a5416da210257d5bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.857345 kubelet[2762]: E0128 01:45:40.857248 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeae419af7fcbbca829138a2a1242c00c064e7995db446a5416da210257d5bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.857425 kubelet[2762]: E0128 01:45:40.857380 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeae419af7fcbbca829138a2a1242c00c064e7995db446a5416da210257d5bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" Jan 28 01:45:40.857425 kubelet[2762]: E0128 01:45:40.857411 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeae419af7fcbbca829138a2a1242c00c064e7995db446a5416da210257d5bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" Jan 28 01:45:40.857495 kubelet[2762]: E0128 01:45:40.857464 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eeae419af7fcbbca829138a2a1242c00c064e7995db446a5416da210257d5bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:45:40.860513 containerd[1553]: time="2026-01-28T01:45:40.860313653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vgk8b,Uid:c966ee1e-4a54-4737-b8f4-7c2be261a470,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8a6d620f20327d8a1a5d01d23712cfa46a7221309fd5e857d418d909fffb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.861104 kubelet[2762]: E0128 01:45:40.860643 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8a6d620f20327d8a1a5d01d23712cfa46a7221309fd5e857d418d909fffb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.861104 kubelet[2762]: E0128 01:45:40.860791 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8a6d620f20327d8a1a5d01d23712cfa46a7221309fd5e857d418d909fffb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:40.861104 kubelet[2762]: E0128 01:45:40.860818 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e8a6d620f20327d8a1a5d01d23712cfa46a7221309fd5e857d418d909fffb97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:40.861873 kubelet[2762]: E0128 01:45:40.861283 
2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e8a6d620f20327d8a1a5d01d23712cfa46a7221309fd5e857d418d909fffb97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:45:40.881489 containerd[1553]: time="2026-01-28T01:45:40.881287486Z" level=error msg="Failed to destroy network for sandbox \"9c201ce6a4cdd37496acd7ed23e458597edf3f6deab1f18aa618d5a231f3238a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.884165 containerd[1553]: time="2026-01-28T01:45:40.884099914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f89845cd-2d67p,Uid:79a20841-d4d9-461b-a1a9-1610a5791824,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c201ce6a4cdd37496acd7ed23e458597edf3f6deab1f18aa618d5a231f3238a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.886130 kubelet[2762]: E0128 01:45:40.885252 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c201ce6a4cdd37496acd7ed23e458597edf3f6deab1f18aa618d5a231f3238a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:40.886130 kubelet[2762]: E0128 01:45:40.885376 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c201ce6a4cdd37496acd7ed23e458597edf3f6deab1f18aa618d5a231f3238a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:40.886130 kubelet[2762]: E0128 01:45:40.885404 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c201ce6a4cdd37496acd7ed23e458597edf3f6deab1f18aa618d5a231f3238a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:40.886375 kubelet[2762]: E0128 01:45:40.885455 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f89845cd-2d67p_calico-system(79a20841-d4d9-461b-a1a9-1610a5791824)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f89845cd-2d67p_calico-system(79a20841-d4d9-461b-a1a9-1610a5791824)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"9c201ce6a4cdd37496acd7ed23e458597edf3f6deab1f18aa618d5a231f3238a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f89845cd-2d67p" podUID="79a20841-d4d9-461b-a1a9-1610a5791824" Jan 28 01:45:41.373465 systemd[1]: Created slice kubepods-besteffort-pod760a12b1_4a99_4684_a026_7c55d7164578.slice - libcontainer container kubepods-besteffort-pod760a12b1_4a99_4684_a026_7c55d7164578.slice. Jan 28 01:45:41.402261 containerd[1553]: time="2026-01-28T01:45:41.401344625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5r68l,Uid:760a12b1-4a99-4684-a026-7c55d7164578,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:41.595237 containerd[1553]: time="2026-01-28T01:45:41.594160254Z" level=error msg="Failed to destroy network for sandbox \"2ae44ad3f4b6ad416a1b51d9db1a3251dad9affb7bbc45751d4be2225bd4cb44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:41.598837 systemd[1]: run-netns-cni\x2d42befede\x2d2f79\x2d3a09\x2de8fc\x2d84252ccdc8ce.mount: Deactivated successfully. Jan 28 01:45:41.605220 containerd[1553]: time="2026-01-28T01:45:41.604867886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5r68l,Uid:760a12b1-4a99-4684-a026-7c55d7164578,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae44ad3f4b6ad416a1b51d9db1a3251dad9affb7bbc45751d4be2225bd4cb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:41.605539 kubelet[2762]: E0128 01:45:41.605467 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae44ad3f4b6ad416a1b51d9db1a3251dad9affb7bbc45751d4be2225bd4cb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:41.606151 kubelet[2762]: E0128 01:45:41.605537 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae44ad3f4b6ad416a1b51d9db1a3251dad9affb7bbc45751d4be2225bd4cb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:41.606151 kubelet[2762]: E0128 01:45:41.605565 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ae44ad3f4b6ad416a1b51d9db1a3251dad9affb7bbc45751d4be2225bd4cb44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:41.606151 kubelet[2762]: E0128 01:45:41.605615 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ae44ad3f4b6ad416a1b51d9db1a3251dad9affb7bbc45751d4be2225bd4cb44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:45:51.350007 containerd[1553]: time="2026-01-28T01:45:51.349785824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-gqzwx,Uid:a2048076-ab34-4562-b42d-515b64a0bfb4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:45:51.556507 containerd[1553]: time="2026-01-28T01:45:51.556208745Z" level=error msg="Failed to destroy network for sandbox \"494abf8b85e74dde3498f7ba2837f60c5302018fbc90294d8a8b18073509fa01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:51.561761 containerd[1553]: time="2026-01-28T01:45:51.560204542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-gqzwx,Uid:a2048076-ab34-4562-b42d-515b64a0bfb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"494abf8b85e74dde3498f7ba2837f60c5302018fbc90294d8a8b18073509fa01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:51.563361 kubelet[2762]: E0128 01:45:51.560991 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"494abf8b85e74dde3498f7ba2837f60c5302018fbc90294d8a8b18073509fa01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:51.563361 kubelet[2762]: E0128 01:45:51.561219 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"494abf8b85e74dde3498f7ba2837f60c5302018fbc90294d8a8b18073509fa01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" Jan 28 01:45:51.563361 kubelet[2762]: E0128 01:45:51.561248 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"494abf8b85e74dde3498f7ba2837f60c5302018fbc90294d8a8b18073509fa01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" Jan 28 01:45:51.562655 systemd[1]: run-netns-cni\x2d143c1934\x2da610\x2dfd7e\x2ddee4\x2dcdc297a1995e.mount: Deactivated successfully. 
Jan 28 01:45:51.564318 kubelet[2762]: E0128 01:45:51.561307 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"494abf8b85e74dde3498f7ba2837f60c5302018fbc90294d8a8b18073509fa01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:45:52.353104 containerd[1553]: time="2026-01-28T01:45:52.350349719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5r68l,Uid:760a12b1-4a99-4684-a026-7c55d7164578,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:52.356307 containerd[1553]: time="2026-01-28T01:45:52.354234750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f89845cd-2d67p,Uid:79a20841-d4d9-461b-a1a9-1610a5791824,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:52.698430 containerd[1553]: time="2026-01-28T01:45:52.695564928Z" level=error msg="Failed to destroy network for sandbox \"052ab5e56631918d2b749bd6d6c9ca0acc45df4c5b4d644adc3465e060fdff69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:52.711342 containerd[1553]: time="2026-01-28T01:45:52.708742534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5r68l,Uid:760a12b1-4a99-4684-a026-7c55d7164578,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"052ab5e56631918d2b749bd6d6c9ca0acc45df4c5b4d644adc3465e060fdff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:52.711586 kubelet[2762]: E0128 01:45:52.709794 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052ab5e56631918d2b749bd6d6c9ca0acc45df4c5b4d644adc3465e060fdff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:52.711586 kubelet[2762]: E0128 01:45:52.709850 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052ab5e56631918d2b749bd6d6c9ca0acc45df4c5b4d644adc3465e060fdff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:52.711586 kubelet[2762]: E0128 01:45:52.709870 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052ab5e56631918d2b749bd6d6c9ca0acc45df4c5b4d644adc3465e060fdff69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5r68l" Jan 28 01:45:52.712284 kubelet[2762]: E0128 01:45:52.710739 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"052ab5e56631918d2b749bd6d6c9ca0acc45df4c5b4d644adc3465e060fdff69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:45:52.716020 systemd[1]: run-netns-cni\x2df86e2a4b\x2db71a\x2d219c\x2dbabe\x2d6a111f08224e.mount: Deactivated successfully. Jan 28 01:45:52.769090 containerd[1553]: time="2026-01-28T01:45:52.768879777Z" level=error msg="Failed to destroy network for sandbox \"e63567a0ba30f5ea1f889cc262eacdb7a7d9c8bd102c90bc2cfa9531f2449d32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:52.774194 containerd[1553]: time="2026-01-28T01:45:52.774001828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f89845cd-2d67p,Uid:79a20841-d4d9-461b-a1a9-1610a5791824,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63567a0ba30f5ea1f889cc262eacdb7a7d9c8bd102c90bc2cfa9531f2449d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:52.775040 kubelet[2762]: E0128 01:45:52.774403 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63567a0ba30f5ea1f889cc262eacdb7a7d9c8bd102c90bc2cfa9531f2449d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:52.775040 kubelet[2762]: E0128 01:45:52.774474 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63567a0ba30f5ea1f889cc262eacdb7a7d9c8bd102c90bc2cfa9531f2449d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:52.775040 kubelet[2762]: E0128 01:45:52.774507 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e63567a0ba30f5ea1f889cc262eacdb7a7d9c8bd102c90bc2cfa9531f2449d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f89845cd-2d67p" Jan 28 01:45:52.774639 systemd[1]: 
run-netns-cni\x2d30d1d473\x2d6595\x2d8795\x2d191d\x2da6ba99c6456e.mount: Deactivated successfully. Jan 28 01:45:52.775287 kubelet[2762]: E0128 01:45:52.774559 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f89845cd-2d67p_calico-system(79a20841-d4d9-461b-a1a9-1610a5791824)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f89845cd-2d67p_calico-system(79a20841-d4d9-461b-a1a9-1610a5791824)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e63567a0ba30f5ea1f889cc262eacdb7a7d9c8bd102c90bc2cfa9531f2449d32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f89845cd-2d67p" podUID="79a20841-d4d9-461b-a1a9-1610a5791824" Jan 28 01:45:54.348226 kubelet[2762]: E0128 01:45:54.348177 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:54.350527 containerd[1553]: time="2026-01-28T01:45:54.350466294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vgk8b,Uid:c966ee1e-4a54-4737-b8f4-7c2be261a470,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:54.352089 containerd[1553]: time="2026-01-28T01:45:54.351545843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ch2px,Uid:2b620013-7ee3-4980-87da-661ce5681449,Namespace:kube-system,Attempt:0,}" Jan 28 01:45:54.352840 kubelet[2762]: E0128 01:45:54.352569 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:54.353216 containerd[1553]: time="2026-01-28T01:45:54.353079224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64456467b5-b47z9,Uid:5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179,Namespace:calico-system,Attempt:0,}" Jan 28 01:45:54.353487 containerd[1553]: time="2026-01-28T01:45:54.353362554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhrtx,Uid:a5967aa1-f4ae-477d-ae42-e205a147743e,Namespace:kube-system,Attempt:0,}" Jan 28 01:45:54.354174 containerd[1553]: time="2026-01-28T01:45:54.353813308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-rjrhf,Uid:2cd00be8-fccf-4399-b5b1-c60bf8266112,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:45:54.835039 containerd[1553]: time="2026-01-28T01:45:54.834244915Z" level=error msg="Failed to destroy network for sandbox \"c554d7e486eef7eaf264d9af0b8b027a4e4f0a5b79538fb264f005b45c553bdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:54.847568 containerd[1553]: time="2026-01-28T01:45:54.847435436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-rjrhf,Uid:2cd00be8-fccf-4399-b5b1-c60bf8266112,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c554d7e486eef7eaf264d9af0b8b027a4e4f0a5b79538fb264f005b45c553bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 28 01:45:54.849279 kubelet[2762]: E0128 01:45:54.847882 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c554d7e486eef7eaf264d9af0b8b027a4e4f0a5b79538fb264f005b45c553bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:54.849279 kubelet[2762]: E0128 01:45:54.849255 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c554d7e486eef7eaf264d9af0b8b027a4e4f0a5b79538fb264f005b45c553bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" Jan 28 01:45:54.849395 kubelet[2762]: E0128 01:45:54.849283 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c554d7e486eef7eaf264d9af0b8b027a4e4f0a5b79538fb264f005b45c553bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" Jan 28 01:45:54.851367 kubelet[2762]: E0128 01:45:54.850520 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c554d7e486eef7eaf264d9af0b8b027a4e4f0a5b79538fb264f005b45c553bdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:45:54.912334 containerd[1553]: time="2026-01-28T01:45:54.911064574Z" level=error msg="Failed to destroy network for sandbox \"df67192d73ce6a7f6ca911922b531a5aba1980b931dfd4e997e218ee874001ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:54.931296 containerd[1553]: time="2026-01-28T01:45:54.928129322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64456467b5-b47z9,Uid:5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df67192d73ce6a7f6ca911922b531a5aba1980b931dfd4e997e218ee874001ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:54.931549 kubelet[2762]: E0128 01:45:54.928392 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"df67192d73ce6a7f6ca911922b531a5aba1980b931dfd4e997e218ee874001ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:54.931549 kubelet[2762]: E0128 01:45:54.928468 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df67192d73ce6a7f6ca911922b531a5aba1980b931dfd4e997e218ee874001ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" Jan 28 01:45:54.931549 kubelet[2762]: E0128 01:45:54.928494 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df67192d73ce6a7f6ca911922b531a5aba1980b931dfd4e997e218ee874001ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" Jan 28 01:45:54.931864 kubelet[2762]: E0128 01:45:54.928541 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df67192d73ce6a7f6ca911922b531a5aba1980b931dfd4e997e218ee874001ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:45:55.042419 containerd[1553]: time="2026-01-28T01:45:55.042257715Z" level=error msg="Failed to destroy network for sandbox \"db6dfb86d01da099eb428b46c9675c9e40b84d82950a6567037c3314d484d82a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.051758 containerd[1553]: time="2026-01-28T01:45:55.051694990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhrtx,Uid:a5967aa1-f4ae-477d-ae42-e205a147743e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6dfb86d01da099eb428b46c9675c9e40b84d82950a6567037c3314d484d82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.053229 kubelet[2762]: E0128 01:45:55.052532 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6dfb86d01da099eb428b46c9675c9e40b84d82950a6567037c3314d484d82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.053229 
kubelet[2762]: E0128 01:45:55.052692 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6dfb86d01da099eb428b46c9675c9e40b84d82950a6567037c3314d484d82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhrtx" Jan 28 01:45:55.053229 kubelet[2762]: E0128 01:45:55.052721 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6dfb86d01da099eb428b46c9675c9e40b84d82950a6567037c3314d484d82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhrtx" Jan 28 01:45:55.053414 kubelet[2762]: E0128 01:45:55.052776 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fhrtx_kube-system(a5967aa1-f4ae-477d-ae42-e205a147743e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fhrtx_kube-system(a5967aa1-f4ae-477d-ae42-e205a147743e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db6dfb86d01da099eb428b46c9675c9e40b84d82950a6567037c3314d484d82a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fhrtx" podUID="a5967aa1-f4ae-477d-ae42-e205a147743e" Jan 28 01:45:55.055273 containerd[1553]: time="2026-01-28T01:45:55.053844690Z" level=error msg="Failed to destroy network for sandbox \"69022b15423a9e8dbf017979ac71dd3cf45e3ce55093296e7825aa0c9faee649\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.066570 containerd[1553]: time="2026-01-28T01:45:55.066370335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ch2px,Uid:2b620013-7ee3-4980-87da-661ce5681449,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69022b15423a9e8dbf017979ac71dd3cf45e3ce55093296e7825aa0c9faee649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.067360 kubelet[2762]: E0128 01:45:55.067227 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69022b15423a9e8dbf017979ac71dd3cf45e3ce55093296e7825aa0c9faee649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.067360 kubelet[2762]: E0128 01:45:55.067290 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69022b15423a9e8dbf017979ac71dd3cf45e3ce55093296e7825aa0c9faee649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ch2px" Jan 28 01:45:55.067360 kubelet[2762]: E0128 01:45:55.067320 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69022b15423a9e8dbf017979ac71dd3cf45e3ce55093296e7825aa0c9faee649\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ch2px" Jan 28 01:45:55.068852 kubelet[2762]: E0128 01:45:55.067365 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ch2px_kube-system(2b620013-7ee3-4980-87da-661ce5681449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ch2px_kube-system(2b620013-7ee3-4980-87da-661ce5681449)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69022b15423a9e8dbf017979ac71dd3cf45e3ce55093296e7825aa0c9faee649\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ch2px" podUID="2b620013-7ee3-4980-87da-661ce5681449" Jan 28 01:45:55.106258 containerd[1553]: time="2026-01-28T01:45:55.102783834Z" level=error msg="Failed to destroy network for sandbox \"3afe193fe40b7001a5713e8480c64d5181e73b0ee989dc21f9ad78293f4bf245\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.121396 containerd[1553]: time="2026-01-28T01:45:55.120074560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vgk8b,Uid:c966ee1e-4a54-4737-b8f4-7c2be261a470,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afe193fe40b7001a5713e8480c64d5181e73b0ee989dc21f9ad78293f4bf245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.122452 kubelet[2762]: E0128 01:45:55.122181 2762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afe193fe40b7001a5713e8480c64d5181e73b0ee989dc21f9ad78293f4bf245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:45:55.123128 kubelet[2762]: E0128 01:45:55.122555 2762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afe193fe40b7001a5713e8480c64d5181e73b0ee989dc21f9ad78293f4bf245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:55.123128 kubelet[2762]: E0128 01:45:55.122649 2762 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3afe193fe40b7001a5713e8480c64d5181e73b0ee989dc21f9ad78293f4bf245\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vgk8b" Jan 28 01:45:55.123128 kubelet[2762]: E0128 01:45:55.122711 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3afe193fe40b7001a5713e8480c64d5181e73b0ee989dc21f9ad78293f4bf245\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:45:55.469027 systemd[1]: run-netns-cni\x2d39a7fa86\x2d5acc\x2d7643\x2d31d7\x2d29c8615e8e7e.mount: Deactivated successfully. Jan 28 01:45:55.469185 systemd[1]: run-netns-cni\x2d8b74428c\x2d28ee\x2d9a1d\x2d043f\x2d0cb49fb45801.mount: Deactivated successfully. Jan 28 01:45:55.469295 systemd[1]: run-netns-cni\x2ddeb9d3fc\x2dae9b\x2d4d94\x2d7814\x2da95c9ebd9da2.mount: Deactivated successfully. Jan 28 01:45:55.469399 systemd[1]: run-netns-cni\x2d70e12fbd\x2d2a5b\x2d4b7f\x2d7812\x2d140a84935e45.mount: Deactivated successfully. Jan 28 01:45:55.469498 systemd[1]: run-netns-cni\x2d381d452b\x2da432\x2d58c6\x2df02e\x2d9ea1894f4d7e.mount: Deactivated successfully. Jan 28 01:45:56.071721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478187641.mount: Deactivated successfully. 
Jan 28 01:45:56.211468 containerd[1553]: time="2026-01-28T01:45:56.211233286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:56.212172 containerd[1553]: time="2026-01-28T01:45:56.211736726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 01:45:56.215018 containerd[1553]: time="2026-01-28T01:45:56.214506086Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:56.227705 containerd[1553]: time="2026-01-28T01:45:56.226386852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:45:56.227705 containerd[1553]: time="2026-01-28T01:45:56.226865150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 15.559726412s" Jan 28 01:45:56.227705 containerd[1553]: time="2026-01-28T01:45:56.226997933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:45:56.274684 containerd[1553]: time="2026-01-28T01:45:56.274360504Z" level=info msg="CreateContainer within sandbox \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:45:56.366859 containerd[1553]: time="2026-01-28T01:45:56.366114936Z" level=info msg="Container 9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:45:56.435824 containerd[1553]: time="2026-01-28T01:45:56.432529463Z" level=info msg="CreateContainer within sandbox \"92a3a45863f97fe1e1899122638a4ea427dcabdc30aaced53c07a0c87369e131\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63\"" Jan 28 01:45:56.435824 containerd[1553]: time="2026-01-28T01:45:56.434109861Z" level=info msg="StartContainer for \"9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63\"" Jan 28 01:45:56.444032 containerd[1553]: time="2026-01-28T01:45:56.443982327Z" level=info msg="connecting to shim 9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63" address="unix:///run/containerd/s/cd8cd0e0328bca4b8bd63a1efa897ee19581493f03bad8aa8e709d5a2caa76ff" protocol=ttrpc version=3 Jan 28 01:45:56.471161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043495792.mount: Deactivated successfully. Jan 28 01:45:56.573263 systemd[1]: Started cri-containerd-9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63.scope - libcontainer container 9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63. Jan 28 01:45:56.796811 containerd[1553]: time="2026-01-28T01:45:56.796215991Z" level=info msg="StartContainer for \"9740cb196f87740725ddd4fe1f63cd8d824ed15ea640d4a32539d02665f02c63\" returns successfully" Jan 28 01:45:57.263356 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 28 01:45:57.266363 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 28 01:45:57.722270 kubelet[2762]: I0128 01:45:57.722231 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-ca-bundle\") pod \"79a20841-d4d9-461b-a1a9-1610a5791824\" (UID: \"79a20841-d4d9-461b-a1a9-1610a5791824\") " Jan 28 01:45:57.725203 kubelet[2762]: I0128 01:45:57.724310 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "79a20841-d4d9-461b-a1a9-1610a5791824" (UID: "79a20841-d4d9-461b-a1a9-1610a5791824"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:45:57.728125 kubelet[2762]: I0128 01:45:57.725365 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-backend-key-pair\") pod \"79a20841-d4d9-461b-a1a9-1610a5791824\" (UID: \"79a20841-d4d9-461b-a1a9-1610a5791824\") " Jan 28 01:45:57.728125 kubelet[2762]: I0128 01:45:57.725414 2762 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzw8q\" (UniqueName: \"kubernetes.io/projected/79a20841-d4d9-461b-a1a9-1610a5791824-kube-api-access-pzw8q\") pod \"79a20841-d4d9-461b-a1a9-1610a5791824\" (UID: \"79a20841-d4d9-461b-a1a9-1610a5791824\") " Jan 28 01:45:57.746074 systemd[1]: var-lib-kubelet-pods-79a20841\x2dd4d9\x2d461b\x2da1a9\x2d1610a5791824-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpzw8q.mount: Deactivated successfully. Jan 28 01:45:57.752574 kubelet[2762]: I0128 01:45:57.752468 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a20841-d4d9-461b-a1a9-1610a5791824-kube-api-access-pzw8q" (OuterVolumeSpecName: "kube-api-access-pzw8q") pod "79a20841-d4d9-461b-a1a9-1610a5791824" (UID: "79a20841-d4d9-461b-a1a9-1610a5791824"). InnerVolumeSpecName "kube-api-access-pzw8q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:45:57.755041 kubelet[2762]: I0128 01:45:57.754865 2762 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "79a20841-d4d9-461b-a1a9-1610a5791824" (UID: "79a20841-d4d9-461b-a1a9-1610a5791824"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:45:57.754996 systemd[1]: var-lib-kubelet-pods-79a20841\x2dd4d9\x2d461b\x2da1a9\x2d1610a5791824-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 28 01:45:57.830511 kubelet[2762]: I0128 01:45:57.830352 2762 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 28 01:45:57.830511 kubelet[2762]: I0128 01:45:57.830386 2762 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79a20841-d4d9-461b-a1a9-1610a5791824-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 28 01:45:57.830511 kubelet[2762]: I0128 01:45:57.830426 2762 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzw8q\" (UniqueName: \"kubernetes.io/projected/79a20841-d4d9-461b-a1a9-1610a5791824-kube-api-access-pzw8q\") on node \"localhost\" DevicePath \"\"" Jan 28 01:45:57.838418 systemd[1]: Removed slice kubepods-besteffort-pod79a20841_d4d9_461b_a1a9_1610a5791824.slice - libcontainer container kubepods-besteffort-pod79a20841_d4d9_461b_a1a9_1610a5791824.slice. Jan 28 01:45:57.839848 kubelet[2762]: E0128 01:45:57.839064 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:57.909225 kubelet[2762]: I0128 01:45:57.903004 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6jz86" podStartSLOduration=2.716800697 podStartE2EDuration="28.902882516s" podCreationTimestamp="2026-01-28 01:45:29 +0000 UTC" firstStartedPulling="2026-01-28 01:45:30.044117527 +0000 UTC m=+27.878625069" lastFinishedPulling="2026-01-28 01:45:56.230199346 +0000 UTC m=+54.064706888" observedRunningTime="2026-01-28 01:45:57.896810314 +0000 UTC m=+55.731317856" watchObservedRunningTime="2026-01-28 01:45:57.902882516 +0000 UTC m=+55.737390058" Jan 28 01:45:58.126619 kubelet[2762]: W0128 01:45:58.117413 2762 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jan 28 01:45:58.126619 kubelet[2762]: E0128 01:45:58.117455 2762 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 28 01:45:58.126619 kubelet[2762]: I0128 01:45:58.117498 2762 status_manager.go:890] "Failed to get status for pod" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" pod="calico-system/whisker-6cc7f69c44-p8qpq" err="pods \"whisker-6cc7f69c44-p8qpq\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Jan 28 01:45:58.124698 systemd[1]: Created slice kubepods-besteffort-podbfdea9eb_0bce_4c15_b321_f9c7a00efdf0.slice - libcontainer container kubepods-besteffort-podbfdea9eb_0bce_4c15_b321_f9c7a00efdf0.slice. 
Jan 28 01:45:58.132733 kubelet[2762]: I0128 01:45:58.132672 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bfdea9eb-0bce-4c15-b321-f9c7a00efdf0-whisker-backend-key-pair\") pod \"whisker-6cc7f69c44-p8qpq\" (UID: \"bfdea9eb-0bce-4c15-b321-f9c7a00efdf0\") " pod="calico-system/whisker-6cc7f69c44-p8qpq" Jan 28 01:45:58.132733 kubelet[2762]: I0128 01:45:58.132727 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfdea9eb-0bce-4c15-b321-f9c7a00efdf0-whisker-ca-bundle\") pod \"whisker-6cc7f69c44-p8qpq\" (UID: \"bfdea9eb-0bce-4c15-b321-f9c7a00efdf0\") " pod="calico-system/whisker-6cc7f69c44-p8qpq" Jan 28 01:45:58.133091 kubelet[2762]: I0128 01:45:58.132750 2762 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwnt\" (UniqueName: \"kubernetes.io/projected/bfdea9eb-0bce-4c15-b321-f9c7a00efdf0-kube-api-access-xmwnt\") pod \"whisker-6cc7f69c44-p8qpq\" (UID: \"bfdea9eb-0bce-4c15-b321-f9c7a00efdf0\") " pod="calico-system/whisker-6cc7f69c44-p8qpq" Jan 28 01:45:58.360696 kubelet[2762]: I0128 01:45:58.356876 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a20841-d4d9-461b-a1a9-1610a5791824" path="/var/lib/kubelet/pods/79a20841-d4d9-461b-a1a9-1610a5791824/volumes" Jan 28 01:45:58.828217 kubelet[2762]: E0128 01:45:58.827401 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:45:59.236441 kubelet[2762]: E0128 01:45:59.236187 2762 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 28 01:45:59.236441 kubelet[2762]: E0128 01:45:59.236304 2762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfdea9eb-0bce-4c15-b321-f9c7a00efdf0-whisker-backend-key-pair podName:bfdea9eb-0bce-4c15-b321-f9c7a00efdf0 nodeName:}" failed. No retries permitted until 2026-01-28 01:45:59.736273806 +0000 UTC m=+57.570781358 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/bfdea9eb-0bce-4c15-b321-f9c7a00efdf0-whisker-backend-key-pair") pod "whisker-6cc7f69c44-p8qpq" (UID: "bfdea9eb-0bce-4c15-b321-f9c7a00efdf0") : failed to sync secret cache: timed out waiting for the condition Jan 28 01:45:59.938239 containerd[1553]: time="2026-01-28T01:45:59.937974306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc7f69c44-p8qpq,Uid:bfdea9eb-0bce-4c15-b321-f9c7a00efdf0,Namespace:calico-system,Attempt:0,}" Jan 28 01:46:00.642044 systemd-networkd[1475]: vxlan.calico: Link UP Jan 28 01:46:00.642056 systemd-networkd[1475]: vxlan.calico: Gained carrier Jan 28 01:46:00.677138 systemd-networkd[1475]: cali393142aa4d1: Link UP Jan 28 01:46:00.682686 systemd-networkd[1475]: cali393142aa4d1: Gained carrier Jan 28 01:46:00.724244 containerd[1553]: 2026-01-28 01:46:00.191 [INFO][4381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0 whisker-6cc7f69c44- calico-system bfdea9eb-0bce-4c15-b321-f9c7a00efdf0 934 0 2026-01-28 01:45:58 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cc7f69c44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6cc7f69c44-p8qpq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali393142aa4d1 [] [] }} ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-" Jan 28 01:46:00.724244 containerd[1553]: 2026-01-28 01:46:00.192 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:00.724244 containerd[1553]: 2026-01-28 01:46:00.458 [INFO][4397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" HandleID="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Workload="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.460 [INFO][4397] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" HandleID="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Workload="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e25a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6cc7f69c44-p8qpq", "timestamp":"2026-01-28 01:46:00.458102098 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.469 [INFO][4397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.470 [INFO][4397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.474 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.494 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" host="localhost" Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.538 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.558 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.571 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.583 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:00.724759 containerd[1553]: 2026-01-28 01:46:00.583 [INFO][4397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" host="localhost" Jan 28 01:46:00.726285 containerd[1553]: 2026-01-28 01:46:00.591 [INFO][4397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8 Jan 28 01:46:00.726285 containerd[1553]: 2026-01-28 01:46:00.606 [INFO][4397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" host="localhost" Jan 28 01:46:00.726285 containerd[1553]: 2026-01-28 01:46:00.628 [INFO][4397] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" host="localhost" Jan 28 01:46:00.726285 containerd[1553]: 2026-01-28 01:46:00.629 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" host="localhost" Jan 28 01:46:00.726285 containerd[1553]: 2026-01-28 01:46:00.629 [INFO][4397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:46:00.726285 containerd[1553]: 2026-01-28 01:46:00.632 [INFO][4397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" HandleID="k8s-pod-network.efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Workload="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:00.726538 containerd[1553]: 2026-01-28 01:46:00.651 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0", GenerateName:"whisker-6cc7f69c44-", Namespace:"calico-system", SelfLink:"", UID:"bfdea9eb-0bce-4c15-b321-f9c7a00efdf0", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc7f69c44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6cc7f69c44-p8qpq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali393142aa4d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:00.726538 containerd[1553]: 2026-01-28 01:46:00.651 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:00.726719 containerd[1553]: 2026-01-28 01:46:00.651 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali393142aa4d1 ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:00.726719 containerd[1553]: 2026-01-28 01:46:00.673 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:00.726786 containerd[1553]: 2026-01-28 01:46:00.674 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0", GenerateName:"whisker-6cc7f69c44-", Namespace:"calico-system", SelfLink:"", UID:"bfdea9eb-0bce-4c15-b321-f9c7a00efdf0", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc7f69c44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8", Pod:"whisker-6cc7f69c44-p8qpq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali393142aa4d1", MAC:"ae:92:36:71:41:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:00.728179 containerd[1553]: 2026-01-28 01:46:00.714 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" Namespace="calico-system" Pod="whisker-6cc7f69c44-p8qpq" WorkloadEndpoint="localhost-k8s-whisker--6cc7f69c44--p8qpq-eth0" Jan 28 01:46:01.015369 containerd[1553]: time="2026-01-28T01:46:01.014218638Z" level=info msg="connecting to shim efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8" address="unix:///run/containerd/s/3ef51265c73e84769318301c4331d21273f4d27ace685ffe9fac48f5dc8435d9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:01.084329 systemd[1]: Started cri-containerd-efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8.scope - libcontainer container efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8.
Jan 28 01:46:01.112069 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:01.231392 containerd[1553]: time="2026-01-28T01:46:01.231255396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc7f69c44-p8qpq,Uid:bfdea9eb-0bce-4c15-b321-f9c7a00efdf0,Namespace:calico-system,Attempt:0,} returns sandbox id \"efa35f2311d73e1d8f09919c62ef4890f429e314ec5de350180fd27e1d08dea8\"" Jan 28 01:46:01.262159 containerd[1553]: time="2026-01-28T01:46:01.261832026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:46:01.332701 containerd[1553]: time="2026-01-28T01:46:01.328089842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:01.332701 containerd[1553]: time="2026-01-28T01:46:01.330709944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:46:01.346013 containerd[1553]: time="2026-01-28T01:46:01.345661226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:46:01.346650 kubelet[2762]: E0128 01:46:01.346210 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:46:01.346650 kubelet[2762]: E0128 01:46:01.346306 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:46:01.361964 kubelet[2762]: E0128 01:46:01.361802 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:298b27a7925644a4836dbb58d943c269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:01.366815 containerd[1553]: time="2026-01-28T01:46:01.366750603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:46:01.450850 containerd[1553]: time="2026-01-28T01:46:01.450801452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:01.458203 containerd[1553]: time="2026-01-28T01:46:01.457766821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:46:01.458203 containerd[1553]: time="2026-01-28T01:46:01.457860704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:46:01.458727 kubelet[2762]: E0128 01:46:01.458419 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:46:01.458727 kubelet[2762]: E0128 01:46:01.458552 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:46:01.458827 kubelet[2762]: E0128 01:46:01.458680 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:01.462588 kubelet[2762]: E0128 01:46:01.461598 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:46:01.853660 kubelet[2762]: E0128 01:46:01.853317 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:46:02.306335 systemd-networkd[1475]: vxlan.calico: Gained IPv6LL Jan 28 01:46:02.690698 systemd-networkd[1475]: cali393142aa4d1: Gained IPv6LL Jan 28 01:46:02.873149 kubelet[2762]: E0128 01:46:02.873045 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:46:03.350242 containerd[1553]: time="2026-01-28T01:46:03.349812740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-gqzwx,Uid:a2048076-ab34-4562-b42d-515b64a0bfb4,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:46:03.734192 systemd-networkd[1475]: cali9bf68550478: Link UP Jan 28 01:46:03.735049 systemd-networkd[1475]: cali9bf68550478: Gained carrier Jan 28 01:46:03.792040 containerd[1553]: 2026-01-28 01:46:03.463 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0 calico-apiserver-6954f9c796- calico-apiserver a2048076-ab34-4562-b42d-515b64a0bfb4 839 0 2026-01-28 01:45:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6954f9c796 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6954f9c796-gqzwx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9bf68550478 [] [] }} ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-" Jan 28 01:46:03.792040 containerd[1553]: 2026-01-28 01:46:03.463 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.792040 containerd[1553]: 2026-01-28 01:46:03.587 [INFO][4556] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" HandleID="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Workload="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.588 [INFO][4556] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" HandleID="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Workload="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6954f9c796-gqzwx", "timestamp":"2026-01-28 01:46:03.587489657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.588 [INFO][4556] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.588 [INFO][4556] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.588 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.611 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" host="localhost" Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.630 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.645 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.653 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.660 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:03.792477 containerd[1553]: 2026-01-28 01:46:03.660 [INFO][4556] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" host="localhost" Jan 28 01:46:03.793177 containerd[1553]: 2026-01-28 01:46:03.670 [INFO][4556] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce Jan 28 01:46:03.793177 containerd[1553]: 2026-01-28 01:46:03.686 [INFO][4556] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" host="localhost" Jan 28 01:46:03.793177 containerd[1553]: 2026-01-28 
01:46:03.717 [INFO][4556] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" host="localhost" Jan 28 01:46:03.793177 containerd[1553]: 2026-01-28 01:46:03.717 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" host="localhost" Jan 28 01:46:03.793177 containerd[1553]: 2026-01-28 01:46:03.717 [INFO][4556] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:46:03.793177 containerd[1553]: 2026-01-28 01:46:03.717 [INFO][4556] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" HandleID="k8s-pod-network.1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Workload="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.793485 containerd[1553]: 2026-01-28 01:46:03.724 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0", GenerateName:"calico-apiserver-6954f9c796-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2048076-ab34-4562-b42d-515b64a0bfb4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6954f9c796", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6954f9c796-gqzwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bf68550478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:03.793686 containerd[1553]: 2026-01-28 01:46:03.724 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.793686 containerd[1553]: 2026-01-28 01:46:03.726 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bf68550478 ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.793686 containerd[1553]: 2026-01-28 01:46:03.733 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.793790 containerd[1553]: 2026-01-28 01:46:03.737 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0", GenerateName:"calico-apiserver-6954f9c796-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2048076-ab34-4562-b42d-515b64a0bfb4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6954f9c796", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce", Pod:"calico-apiserver-6954f9c796-gqzwx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bf68550478", MAC:"46:47:61:6c:8b:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:03.797174 containerd[1553]: 2026-01-28 01:46:03.782 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-gqzwx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--gqzwx-eth0" Jan 28 01:46:03.860228 containerd[1553]: time="2026-01-28T01:46:03.860120357Z" level=info msg="connecting to shim 1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce" address="unix:///run/containerd/s/356d38a09f94e49b3f55e429122f2b65e9fdd602acd3c3b3fed73a6e60842cf7" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:03.931370 systemd[1]: Started cri-containerd-1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce.scope - libcontainer container 1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce. 
Jan 28 01:46:03.972671 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:04.069867 containerd[1553]: time="2026-01-28T01:46:04.069686849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-gqzwx,Uid:a2048076-ab34-4562-b42d-515b64a0bfb4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1cd754989db1fb7093e14c1c7793cf726a3b2c755bf73ae3194f06b11d127bce\"" Jan 28 01:46:04.075214 containerd[1553]: time="2026-01-28T01:46:04.075119340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:46:04.179278 containerd[1553]: time="2026-01-28T01:46:04.179150872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:04.183856 containerd[1553]: time="2026-01-28T01:46:04.183643671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:46:04.183856 containerd[1553]: time="2026-01-28T01:46:04.183819695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:04.184539 kubelet[2762]: E0128 01:46:04.184153 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:04.184539 kubelet[2762]: E0128 01:46:04.184212 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:04.186134 kubelet[2762]: E0128 01:46:04.184365 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2zt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:04.187179 kubelet[2762]: E0128 01:46:04.187111 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:04.359066 containerd[1553]: time="2026-01-28T01:46:04.354107986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5r68l,Uid:760a12b1-4a99-4684-a026-7c55d7164578,Namespace:calico-system,Attempt:0,}" Jan 28 01:46:04.746439 systemd-networkd[1475]: cali4c9acbf28a0: Link UP Jan 28 01:46:04.747612 systemd-networkd[1475]: cali4c9acbf28a0: Gained carrier Jan 28 01:46:04.775156 containerd[1553]: 2026-01-28 01:46:04.505 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5r68l-eth0 csi-node-driver- calico-system 760a12b1-4a99-4684-a026-7c55d7164578 730 0 2026-01-28 01:45:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5r68l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4c9acbf28a0 [] [] }} ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-" Jan 28 01:46:04.775156 containerd[1553]: 2026-01-28 01:46:04.506 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.775156 containerd[1553]: 2026-01-28 01:46:04.596 [INFO][4636] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" HandleID="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Workload="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.597 [INFO][4636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" HandleID="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Workload="localhost-k8s-csi--node--driver--5r68l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bfb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5r68l", "timestamp":"2026-01-28 01:46:04.596713004 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.597 [INFO][4636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.597 [INFO][4636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.597 [INFO][4636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.611 [INFO][4636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" host="localhost" Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.641 [INFO][4636] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.662 [INFO][4636] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.667 [INFO][4636] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.678 [INFO][4636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:04.776010 containerd[1553]: 2026-01-28 01:46:04.678 [INFO][4636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" host="localhost" Jan 28 01:46:04.778856 containerd[1553]: 2026-01-28 01:46:04.694 [INFO][4636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0 Jan 28 01:46:04.778856 containerd[1553]: 2026-01-28 01:46:04.711 [INFO][4636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" host="localhost" Jan 28 01:46:04.778856 containerd[1553]: 2026-01-28 01:46:04.732 [INFO][4636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" host="localhost" Jan 28 01:46:04.778856 containerd[1553]: 2026-01-28 01:46:04.733 [INFO][4636] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" host="localhost" Jan 28 01:46:04.778856 containerd[1553]: 2026-01-28 01:46:04.733 [INFO][4636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:46:04.778856 containerd[1553]: 2026-01-28 01:46:04.733 [INFO][4636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" HandleID="k8s-pod-network.ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Workload="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.779326 containerd[1553]: 2026-01-28 01:46:04.739 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5r68l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"760a12b1-4a99-4684-a026-7c55d7164578", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5r68l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c9acbf28a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:04.780063 containerd[1553]: 2026-01-28 01:46:04.739 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.780063 containerd[1553]: 2026-01-28 01:46:04.739 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c9acbf28a0 ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.780063 containerd[1553]: 2026-01-28 01:46:04.747 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.780265 containerd[1553]: 2026-01-28 
01:46:04.748 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5r68l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"760a12b1-4a99-4684-a026-7c55d7164578", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0", Pod:"csi-node-driver-5r68l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c9acbf28a0", MAC:"e2:58:7f:c6:92:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:04.780469 containerd[1553]: 2026-01-28 01:46:04.767 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" Namespace="calico-system" Pod="csi-node-driver-5r68l" WorkloadEndpoint="localhost-k8s-csi--node--driver--5r68l-eth0" Jan 28 01:46:04.864678 containerd[1553]: time="2026-01-28T01:46:04.864461160Z" level=info msg="connecting to shim ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0" address="unix:///run/containerd/s/c04e39c586c6b51d34d499b2fb3eddf8789e854fc44eea3a0213aab71f6f6c92" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:04.881125 kubelet[2762]: E0128 01:46:04.880595 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:04.930027 systemd[1]: Started cri-containerd-ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0.scope - libcontainer container ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0. 
Jan 28 01:46:04.975739 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:05.029128 containerd[1553]: time="2026-01-28T01:46:05.028517899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5r68l,Uid:760a12b1-4a99-4684-a026-7c55d7164578,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff675f953fff9c6c22c57baec1fc4cbedcfd07b4591166857eea0281920a8ab0\"" Jan 28 01:46:05.032108 containerd[1553]: time="2026-01-28T01:46:05.032078519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:46:05.105703 containerd[1553]: time="2026-01-28T01:46:05.105557331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:05.109532 containerd[1553]: time="2026-01-28T01:46:05.108763809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:46:05.109532 containerd[1553]: time="2026-01-28T01:46:05.108870676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:46:05.109778 kubelet[2762]: E0128 01:46:05.109471 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:46:05.109778 kubelet[2762]: E0128 01:46:05.109528 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:46:05.109778 kubelet[2762]: E0128 01:46:05.109690 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:05.114009 containerd[1553]: time="2026-01-28T01:46:05.113732789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:46:05.182034 containerd[1553]: time="2026-01-28T01:46:05.181299803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:05.184435 containerd[1553]: time="2026-01-28T01:46:05.184174161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:46:05.184435 containerd[1553]: time="2026-01-28T01:46:05.184330368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:46:05.184727 kubelet[2762]: E0128 01:46:05.184653 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:46:05.185284 kubelet[2762]: E0128 01:46:05.184721 2762 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:46:05.185284 kubelet[2762]: E0128 01:46:05.184847 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:05.186554 kubelet[2762]: E0128 01:46:05.186320 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:05.508509 systemd-networkd[1475]: cali9bf68550478: Gained IPv6LL Jan 28 01:46:05.886022 kubelet[2762]: E0128 01:46:05.885706 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:05.894545 kubelet[2762]: E0128 01:46:05.893980 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:06.082841 systemd-networkd[1475]: cali4c9acbf28a0: Gained IPv6LL Jan 28 01:46:06.353538 kubelet[2762]: E0128 01:46:06.353305 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:06.356585 kubelet[2762]: E0128 01:46:06.356175 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:06.358400 containerd[1553]: time="2026-01-28T01:46:06.357836127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ch2px,Uid:2b620013-7ee3-4980-87da-661ce5681449,Namespace:kube-system,Attempt:0,}" Jan 28 01:46:06.359086 containerd[1553]: time="2026-01-28T01:46:06.359057815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhrtx,Uid:a5967aa1-f4ae-477d-ae42-e205a147743e,Namespace:kube-system,Attempt:0,}" Jan 28 01:46:06.669179 systemd-networkd[1475]: cali97c4f28e06f: Link UP Jan 28 01:46:06.671628 systemd-networkd[1475]: cali97c4f28e06f: Gained carrier Jan 28 01:46:06.702498 containerd[1553]: 2026-01-28 01:46:06.486 [INFO][4708] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--ch2px-eth0 coredns-668d6bf9bc- kube-system 2b620013-7ee3-4980-87da-661ce5681449 844 0 2026-01-28 01:45:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-ch2px eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali97c4f28e06f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-" Jan 28 01:46:06.702498 containerd[1553]: 2026-01-28 01:46:06.486 [INFO][4708] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.702498 containerd[1553]: 2026-01-28 01:46:06.568 [INFO][4736] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" HandleID="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Workload="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.569 [INFO][4736] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" HandleID="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Workload="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000120990), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-ch2px", "timestamp":"2026-01-28 01:46:06.5685798 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.569 [INFO][4736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.569 [INFO][4736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.569 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.584 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" host="localhost" Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.595 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.609 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.615 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.620 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:06.702851 containerd[1553]: 2026-01-28 01:46:06.620 [INFO][4736] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" host="localhost" Jan 28 01:46:06.703504 containerd[1553]: 2026-01-28 01:46:06.626 [INFO][4736] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426 Jan 28 01:46:06.703504 containerd[1553]: 2026-01-28 01:46:06.637 [INFO][4736] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" host="localhost" Jan 28 01:46:06.703504 containerd[1553]: 2026-01-28 01:46:06.653 [INFO][4736] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" host="localhost" Jan 28 01:46:06.703504 containerd[1553]: 2026-01-28 01:46:06.653 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" host="localhost" Jan 28 01:46:06.703504 containerd[1553]: 2026-01-28 01:46:06.653 [INFO][4736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:46:06.703504 containerd[1553]: 2026-01-28 01:46:06.653 [INFO][4736] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" HandleID="k8s-pod-network.9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Workload="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.703683 containerd[1553]: 2026-01-28 01:46:06.659 [INFO][4708] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ch2px-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2b620013-7ee3-4980-87da-661ce5681449", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-ch2px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97c4f28e06f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:06.703840 containerd[1553]: 2026-01-28 01:46:06.660 [INFO][4708] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.703840 containerd[1553]: 2026-01-28 01:46:06.660 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97c4f28e06f ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.703840 containerd[1553]: 2026-01-28 01:46:06.672 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.704057 
containerd[1553]: 2026-01-28 01:46:06.673 [INFO][4708] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--ch2px-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2b620013-7ee3-4980-87da-661ce5681449", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426", Pod:"coredns-668d6bf9bc-ch2px", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali97c4f28e06f", MAC:"32:e3:1d:20:24:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:06.704057 containerd[1553]: 2026-01-28 01:46:06.695 [INFO][4708] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" Namespace="kube-system" Pod="coredns-668d6bf9bc-ch2px" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--ch2px-eth0" Jan 28 01:46:06.786219 containerd[1553]: time="2026-01-28T01:46:06.786098029Z" level=info msg="connecting to shim 9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426" address="unix:///run/containerd/s/200a7ff99a5052f7fae8e11545d1d08c3c7eb7f4fcd5faaba5d0f0c290c7896b" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:06.820666 systemd-networkd[1475]: calia034d3621bf: Link UP Jan 28 01:46:06.821635 systemd-networkd[1475]: calia034d3621bf: Gained carrier Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.503 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0 coredns-668d6bf9bc- kube-system a5967aa1-f4ae-477d-ae42-e205a147743e 849 0 2026-01-28 01:45:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-fhrtx eth0 
coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia034d3621bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.503 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.622 [INFO][4744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" HandleID="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Workload="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.623 [INFO][4744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" HandleID="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Workload="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-fhrtx", "timestamp":"2026-01-28 01:46:06.622426451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.624 [INFO][4744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.653 [INFO][4744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.653 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.689 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.713 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.734 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.740 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.748 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.748 [INFO][4744] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.755 [INFO][4744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.774 [INFO][4744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.801 [INFO][4744] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.801 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" host="localhost" Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.801 [INFO][4744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
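
The IPAM arc recorded above (acquire the host-wide lock, confirm this host's affinity for block 192.168.88.128/26, claim one address, release the lock) is driven by a single AutoAssign call whose arguments the plugin dumps verbatim in the log as ipam.AutoAssignArgs{Num4:1, Num6:0, ...}. Below is a minimal Go sketch of that call, assuming libcalico-go's clientv3 package; the constructor and exact return types drift between Calico releases, so treat it as illustrative rather than exact:

    package main

    import (
        "context"
        "fmt"
        "log"

        clientv3 "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        // Build a client from DATASTORE_TYPE / KUBECONFIG environment variables.
        c, err := clientv3.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }

        // Handle IDs in this journal follow "k8s-pod-network.<container-id>".
        handle := "k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd"

        // Field names mirror the AutoAssignArgs dump in the log. The log also
        // records IntendedUse:"Workload", omitted here because its Go type
        // differs between releases.
        v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: &handle,
            Hostname: "localhost",
            Attrs: map[string]string{
                "namespace": "kube-system",
                "node":      "localhost",
                "pod":       "coredns-668d6bf9bc-fhrtx",
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("assigned: %v\n", v4) // e.g. 192.168.88.133/26 from the affine block
    }
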
Jan 28 01:46:06.865177 containerd[1553]: 2026-01-28 01:46:06.801 [INFO][4744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" HandleID="k8s-pod-network.ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Workload="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.866420 containerd[1553]: 2026-01-28 01:46:06.812 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a5967aa1-f4ae-477d-ae42-e205a147743e", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-fhrtx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia034d3621bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:06.866420 containerd[1553]: 2026-01-28 01:46:06.812 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.866420 containerd[1553]: 2026-01-28 01:46:06.812 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia034d3621bf ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.866420 containerd[1553]: 2026-01-28 01:46:06.824 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.866420 
containerd[1553]: 2026-01-28 01:46:06.825 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a5967aa1-f4ae-477d-ae42-e205a147743e", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd", Pod:"coredns-668d6bf9bc-fhrtx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia034d3621bf", MAC:"ba:5c:d1:cd:99:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:06.866420 containerd[1553]: 2026-01-28 01:46:06.855 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhrtx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--fhrtx-eth0" Jan 28 01:46:06.882646 systemd[1]: Started cri-containerd-9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426.scope - libcontainer container 9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426. 
Jan 28 01:46:06.893275 kubelet[2762]: E0128 01:46:06.893176 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:06.928718 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:06.953752 containerd[1553]: time="2026-01-28T01:46:06.953648794Z" level=info msg="connecting to shim ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd" address="unix:///run/containerd/s/76495c6528599401ce9e7047ba6fc2d8157878ae32800e762cba410f60e6115e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:06.989403 containerd[1553]: time="2026-01-28T01:46:06.989171994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ch2px,Uid:2b620013-7ee3-4980-87da-661ce5681449,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426\"" Jan 28 01:46:06.990967 kubelet[2762]: E0128 01:46:06.990278 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:07.001407 containerd[1553]: time="2026-01-28T01:46:07.001277187Z" level=info msg="CreateContainer within sandbox \"9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:46:07.005377 systemd[1]: Started cri-containerd-ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd.scope - libcontainer container ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd. 
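
The CreateContainer/StartContainer messages that follow trace the standard containerd client arc: create a container from an image, create a task for it (the "connecting to shim ... protocol=ttrpc version=3" lines), then start the task. A self-contained sketch against the containerd Go client; the image reference is hypothetical, since in this journal it is the kubelet's CRI calls, not a standalone program, that drive these steps:

    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // Same socket the kubelet talks to.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI workloads live in the k8s.io namespace (namespace=k8s.io above).
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Hypothetical reference; any image already present in the store works.
        image, err := client.GetImage(ctx, "docker.io/library/busybox:latest")
        if err != nil {
            log.Fatal(err)
        }

        // NewContainer + NewTask + Start mirror the CreateContainer /
        // "connecting to shim" / StartContainer sequence in the journal.
        container, err := client.NewContainer(ctx, "demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        // Corresponds to "StartContainer ... returns successfully".
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }
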
Jan 28 01:46:07.025576 containerd[1553]: time="2026-01-28T01:46:07.025504095Z" level=info msg="Container 6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:46:07.032089 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:07.040859 containerd[1553]: time="2026-01-28T01:46:07.040792133Z" level=info msg="CreateContainer within sandbox \"9cc745c3815d15c62d046697c8fcf339824bdd130d4b9f9792290a4485a24426\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf\"" Jan 28 01:46:07.047406 containerd[1553]: time="2026-01-28T01:46:07.047192126Z" level=info msg="StartContainer for \"6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf\"" Jan 28 01:46:07.051131 containerd[1553]: time="2026-01-28T01:46:07.050975301Z" level=info msg="connecting to shim 6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf" address="unix:///run/containerd/s/200a7ff99a5052f7fae8e11545d1d08c3c7eb7f4fcd5faaba5d0f0c290c7896b" protocol=ttrpc version=3 Jan 28 01:46:07.079123 systemd[1]: Started cri-containerd-6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf.scope - libcontainer container 6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf. Jan 28 01:46:07.098649 containerd[1553]: time="2026-01-28T01:46:07.098609640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhrtx,Uid:a5967aa1-f4ae-477d-ae42-e205a147743e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd\"" Jan 28 01:46:07.100587 kubelet[2762]: E0128 01:46:07.100560 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:07.103701 containerd[1553]: time="2026-01-28T01:46:07.103674804Z" level=info msg="CreateContainer within sandbox \"ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:46:07.124624 containerd[1553]: time="2026-01-28T01:46:07.124514363Z" level=info msg="Container 2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9: CDI devices from CRI Config.CDIDevices: []" Jan 28 01:46:07.141960 containerd[1553]: time="2026-01-28T01:46:07.141766645Z" level=info msg="CreateContainer within sandbox \"ff12894ca505134761d40db0f732765ecd346c1713dc15f40a45f37310ade2bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9\"" Jan 28 01:46:07.143740 containerd[1553]: time="2026-01-28T01:46:07.143651030Z" level=info msg="StartContainer for \"2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9\"" Jan 28 01:46:07.146183 containerd[1553]: time="2026-01-28T01:46:07.146121268Z" level=info msg="connecting to shim 2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9" address="unix:///run/containerd/s/76495c6528599401ce9e7047ba6fc2d8157878ae32800e762cba410f60e6115e" protocol=ttrpc version=3 Jan 28 01:46:07.161202 containerd[1553]: time="2026-01-28T01:46:07.161101286Z" level=info msg="StartContainer for \"6f55c47dff97d3fd9497f937f3469f08f7c979f59913900d2e6bc78a2bebe6cf\" returns successfully" Jan 28 01:46:07.184371 systemd[1]: Started 
cri-containerd-2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9.scope - libcontainer container 2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9. Jan 28 01:46:07.254558 containerd[1553]: time="2026-01-28T01:46:07.254437123Z" level=info msg="StartContainer for \"2583fcec1d3e8d9f711066a037eb835ede9abb59287805e66625d0801a884dd9\" returns successfully" Jan 28 01:46:07.746292 systemd-networkd[1475]: cali97c4f28e06f: Gained IPv6LL Jan 28 01:46:07.891615 kubelet[2762]: E0128 01:46:07.891408 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:07.896800 kubelet[2762]: E0128 01:46:07.896607 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:07.938461 kubelet[2762]: I0128 01:46:07.938167 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ch2px" podStartSLOduration=60.938142822 podStartE2EDuration="1m0.938142822s" podCreationTimestamp="2026-01-28 01:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:46:07.913135166 +0000 UTC m=+65.747642729" watchObservedRunningTime="2026-01-28 01:46:07.938142822 +0000 UTC m=+65.772650364" Jan 28 01:46:07.958636 kubelet[2762]: I0128 01:46:07.958515 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fhrtx" podStartSLOduration=60.958489486 podStartE2EDuration="1m0.958489486s" podCreationTimestamp="2026-01-28 01:45:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:46:07.934771576 +0000 UTC m=+65.769279118" watchObservedRunningTime="2026-01-28 01:46:07.958489486 +0000 UTC m=+65.792997028" Jan 28 01:46:08.067237 systemd-networkd[1475]: calia034d3621bf: Gained IPv6LL Jan 28 01:46:08.348999 containerd[1553]: time="2026-01-28T01:46:08.348474168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vgk8b,Uid:c966ee1e-4a54-4737-b8f4-7c2be261a470,Namespace:calico-system,Attempt:0,}" Jan 28 01:46:08.594049 systemd-networkd[1475]: cali9afa578169b: Link UP Jan 28 01:46:08.595137 systemd-networkd[1475]: cali9afa578169b: Gained carrier Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.431 [INFO][4937] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vgk8b-eth0 goldmane-666569f655- calico-system c966ee1e-4a54-4737-b8f4-7c2be261a470 846 0 2026-01-28 01:45:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vgk8b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9afa578169b [] [] }} ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.432 [INFO][4937] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.505 [INFO][4952] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" HandleID="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Workload="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.506 [INFO][4952] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" HandleID="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Workload="localhost-k8s-goldmane--666569f655--vgk8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00021d480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vgk8b", "timestamp":"2026-01-28 01:46:08.505965209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.506 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.507 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.507 [INFO][4952] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.524 [INFO][4952] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.543 [INFO][4952] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.553 [INFO][4952] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.557 [INFO][4952] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.561 [INFO][4952] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.561 [INFO][4952] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.564 [INFO][4952] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921 Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.571 [INFO][4952] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.584 [INFO][4952] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.584 [INFO][4952] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" host="localhost" Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.584 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:46:08.620740 containerd[1553]: 2026-01-28 01:46:08.584 [INFO][4952] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" HandleID="k8s-pod-network.5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Workload="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.621792 containerd[1553]: 2026-01-28 01:46:08.589 [INFO][4937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vgk8b-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c966ee1e-4a54-4737-b8f4-7c2be261a470", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vgk8b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9afa578169b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:08.621792 containerd[1553]: 2026-01-28 01:46:08.590 [INFO][4937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.621792 containerd[1553]: 2026-01-28 01:46:08.590 [INFO][4937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9afa578169b ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.621792 containerd[1553]: 2026-01-28 01:46:08.595 [INFO][4937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.621792 containerd[1553]: 2026-01-28 01:46:08.596 [INFO][4937] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vgk8b-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c966ee1e-4a54-4737-b8f4-7c2be261a470", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921", Pod:"goldmane-666569f655-vgk8b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9afa578169b", MAC:"d2:e7:3b:a7:b4:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:08.621792 containerd[1553]: 2026-01-28 01:46:08.614 [INFO][4937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" Namespace="calico-system" Pod="goldmane-666569f655-vgk8b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vgk8b-eth0" Jan 28 01:46:08.681162 containerd[1553]: time="2026-01-28T01:46:08.681026983Z" level=info msg="connecting to shim 5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921" address="unix:///run/containerd/s/8e17539083f110f8ba7545c56ffad48c771183441c3c6e856a996aba18fb09d4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:08.788040 systemd[1]: Started cri-containerd-5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921.scope - libcontainer container 5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921. 
Jan 28 01:46:08.819195 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:08.894609 containerd[1553]: time="2026-01-28T01:46:08.894407854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vgk8b,Uid:c966ee1e-4a54-4737-b8f4-7c2be261a470,Namespace:calico-system,Attempt:0,} returns sandbox id \"5180201bfdfe5d04038010a596a4eee0c195b89870fa15c235efb1378d0dd921\"" Jan 28 01:46:08.897440 containerd[1553]: time="2026-01-28T01:46:08.897166082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:46:08.904047 kubelet[2762]: E0128 01:46:08.903728 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:08.905992 kubelet[2762]: E0128 01:46:08.905768 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:08.960384 containerd[1553]: time="2026-01-28T01:46:08.960207634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:08.962275 containerd[1553]: time="2026-01-28T01:46:08.962100290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:46:08.962275 containerd[1553]: time="2026-01-28T01:46:08.962224188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:08.962768 kubelet[2762]: E0128 01:46:08.962690 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:46:08.962837 kubelet[2762]: E0128 01:46:08.962780 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:46:08.963122 kubelet[2762]: E0128 01:46:08.963019 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sgb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:08.964771 kubelet[2762]: E0128 01:46:08.964572 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:46:09.347797 containerd[1553]: 
time="2026-01-28T01:46:09.347609244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-rjrhf,Uid:2cd00be8-fccf-4399-b5b1-c60bf8266112,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:46:09.348055 containerd[1553]: time="2026-01-28T01:46:09.347609173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64456467b5-b47z9,Uid:5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179,Namespace:calico-system,Attempt:0,}" Jan 28 01:46:09.573775 systemd-networkd[1475]: cali233390b6603: Link UP Jan 28 01:46:09.574162 systemd-networkd[1475]: cali233390b6603: Gained carrier Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.426 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0 calico-kube-controllers-64456467b5- calico-system 5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179 848 0 2026-01-28 01:45:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64456467b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64456467b5-b47z9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali233390b6603 [] [] }} ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.427 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.492 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" HandleID="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Workload="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.492 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" HandleID="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Workload="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64456467b5-b47z9", "timestamp":"2026-01-28 01:46:09.492004516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.492 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.493 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.493 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.507 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.520 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.531 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.534 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.539 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.539 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.543 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.549 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.562 [INFO][5047] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.563 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" host="localhost" Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.563 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
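
Every allocation in this journal comes out of the same affinity block, 192.168.88.128/26, which is why the assigned addresses simply march upward (.132 and .133 for the coredns pods, .134 for goldmane, .135 here, .136 for the apiserver below). A standard-library snippet that enumerates the 64 addresses of that block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block the IPAM log keeps loading: 192.168.88.128/26,
        // i.e. 192.168.88.128 through 192.168.88.191.
        prefix := netip.MustParsePrefix("192.168.88.128/26")
        for addr := prefix.Addr(); prefix.Contains(addr); addr = addr.Next() {
            fmt.Println(addr) // the allocator hands out free addresses from this range
        }
    }
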
Jan 28 01:46:09.597709 containerd[1553]: 2026-01-28 01:46:09.563 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" HandleID="k8s-pod-network.35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Workload="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.599647 containerd[1553]: 2026-01-28 01:46:09.568 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0", GenerateName:"calico-kube-controllers-64456467b5-", Namespace:"calico-system", SelfLink:"", UID:"5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64456467b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64456467b5-b47z9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali233390b6603", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:09.599647 containerd[1553]: 2026-01-28 01:46:09.569 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.599647 containerd[1553]: 2026-01-28 01:46:09.569 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali233390b6603 ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.599647 containerd[1553]: 2026-01-28 01:46:09.574 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.599647 containerd[1553]: 2026-01-28 01:46:09.574 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0", GenerateName:"calico-kube-controllers-64456467b5-", Namespace:"calico-system", SelfLink:"", UID:"5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64456467b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac", Pod:"calico-kube-controllers-64456467b5-b47z9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali233390b6603", MAC:"ee:4c:df:ff:9f:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:09.599647 containerd[1553]: 2026-01-28 01:46:09.593 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" Namespace="calico-system" Pod="calico-kube-controllers-64456467b5-b47z9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64456467b5--b47z9-eth0" Jan 28 01:46:09.656067 containerd[1553]: time="2026-01-28T01:46:09.655972977Z" level=info msg="connecting to shim 35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac" address="unix:///run/containerd/s/993f6049aa6caddd8a557b2ecb05fef95a151df95de4ea39af78e694a4e4a194" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:09.691744 systemd-networkd[1475]: cali85441e5ef94: Link UP Jan 28 01:46:09.693511 systemd-networkd[1475]: cali85441e5ef94: Gained carrier Jan 28 01:46:09.724629 systemd[1]: Started cri-containerd-35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac.scope - libcontainer container 35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac. 
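
Each "Link UP" / "Gained carrier" pair from systemd-networkd announces the host side of a Calico veth (cali97c4f28e06f, calia034d3621bf, cali233390b6603, and now cali85441e5ef94). A sketch that lists those interfaces using the third-party vishvananda/netlink package; the library choice is an assumption, and "ip -o link | grep cali" shows the same from a shell:

    package main

    import (
        "fmt"
        "log"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            log.Fatal(err)
        }
        for _, l := range links {
            attrs := l.Attrs()
            // Calico prefixes host-side veth names with "cali".
            if strings.HasPrefix(attrs.Name, "cali") {
                fmt.Printf("%s flags=%v mtu=%d\n", attrs.Name, attrs.Flags, attrs.MTU)
            }
        }
    }
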
Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.456 [INFO][5016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0 calico-apiserver-6954f9c796- calico-apiserver 2cd00be8-fccf-4399-b5b1-c60bf8266112 847 0 2026-01-28 01:45:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6954f9c796 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6954f9c796-rjrhf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85441e5ef94 [] [] }} ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.456 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.512 [INFO][5054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" HandleID="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Workload="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.512 [INFO][5054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" HandleID="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Workload="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003919e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6954f9c796-rjrhf", "timestamp":"2026-01-28 01:46:09.512566052 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.512 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.563 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.563 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.608 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.620 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.633 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.640 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.647 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.647 [INFO][5054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.653 [INFO][5054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.662 [INFO][5054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.676 [INFO][5054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.676 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" host="localhost" Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.676 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:46:09.730681 containerd[1553]: 2026-01-28 01:46:09.677 [INFO][5054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" HandleID="k8s-pod-network.8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Workload="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.731658 containerd[1553]: 2026-01-28 01:46:09.686 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0", GenerateName:"calico-apiserver-6954f9c796-", Namespace:"calico-apiserver", SelfLink:"", UID:"2cd00be8-fccf-4399-b5b1-c60bf8266112", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6954f9c796", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6954f9c796-rjrhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85441e5ef94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:09.731658 containerd[1553]: 2026-01-28 01:46:09.686 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.731658 containerd[1553]: 2026-01-28 01:46:09.686 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85441e5ef94 ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.731658 containerd[1553]: 2026-01-28 01:46:09.695 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.731658 containerd[1553]: 2026-01-28 01:46:09.696 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0", GenerateName:"calico-apiserver-6954f9c796-", Namespace:"calico-apiserver", SelfLink:"", UID:"2cd00be8-fccf-4399-b5b1-c60bf8266112", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6954f9c796", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda", Pod:"calico-apiserver-6954f9c796-rjrhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85441e5ef94", MAC:"46:36:fa:40:73:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:46:09.731658 containerd[1553]: 2026-01-28 01:46:09.720 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" Namespace="calico-apiserver" Pod="calico-apiserver-6954f9c796-rjrhf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6954f9c796--rjrhf-eth0" Jan 28 01:46:09.808423 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:09.816023 containerd[1553]: time="2026-01-28T01:46:09.814150852Z" level=info msg="connecting to shim 8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda" address="unix:///run/containerd/s/bc71d317683e1bf32265e6e4a3a68b3df7abea4196c225ea817e4df9a2ff4775" namespace=k8s.io protocol=ttrpc version=3 Jan 28 01:46:09.868210 systemd[1]: Started cri-containerd-8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda.scope - libcontainer container 8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda. 
Jan 28 01:46:09.893199 containerd[1553]: time="2026-01-28T01:46:09.893100907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64456467b5-b47z9,Uid:5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179,Namespace:calico-system,Attempt:0,} returns sandbox id \"35ebfb6b17572a32230bdfdaedb249cf9eb1339de9fe0736ec5b607abb5ee8ac\"" Jan 28 01:46:09.901729 containerd[1553]: time="2026-01-28T01:46:09.900732356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:46:09.905404 systemd-resolved[1476]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 28 01:46:09.914671 kubelet[2762]: E0128 01:46:09.914633 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:09.915970 kubelet[2762]: E0128 01:46:09.915185 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:09.915970 kubelet[2762]: E0128 01:46:09.915486 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:46:09.987025 containerd[1553]: time="2026-01-28T01:46:09.986769074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:09.990426 containerd[1553]: time="2026-01-28T01:46:09.990336566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:46:09.990516 containerd[1553]: time="2026-01-28T01:46:09.990471957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:46:09.990740 kubelet[2762]: E0128 01:46:09.990598 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:46:09.990740 kubelet[2762]: E0128 01:46:09.990696 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:46:09.991209 kubelet[2762]: E0128 01:46:09.990842 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82qmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:09.995367 kubelet[2762]: E0128 01:46:09.994681 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:46:10.014479 containerd[1553]: time="2026-01-28T01:46:10.014373728Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-6954f9c796-rjrhf,Uid:2cd00be8-fccf-4399-b5b1-c60bf8266112,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a1161edd2e5bf8fad87ab56677eeeb5af1ddb8512eb3897fc1cec53193e5cda\"" Jan 28 01:46:10.017553 containerd[1553]: time="2026-01-28T01:46:10.017225663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:46:10.050273 systemd-networkd[1475]: cali9afa578169b: Gained IPv6LL Jan 28 01:46:10.085348 containerd[1553]: time="2026-01-28T01:46:10.085183754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:10.088199 containerd[1553]: time="2026-01-28T01:46:10.087866026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:46:10.088199 containerd[1553]: time="2026-01-28T01:46:10.087999663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:10.088392 kubelet[2762]: E0128 01:46:10.088269 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:10.088392 kubelet[2762]: E0128 01:46:10.088385 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:10.088646 kubelet[2762]: E0128 01:46:10.088521 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:10.090709 kubelet[2762]: E0128 01:46:10.090611 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:46:10.921113 kubelet[2762]: E0128 01:46:10.919564 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:46:10.922695 kubelet[2762]: E0128 01:46:10.920381 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:46:11.331588 systemd-networkd[1475]: cali85441e5ef94: Gained IPv6LL Jan 28 01:46:11.347776 kubelet[2762]: E0128 01:46:11.347534 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:11.394373 systemd-networkd[1475]: cali233390b6603: Gained IPv6LL Jan 28 01:46:11.923238 kubelet[2762]: E0128 
01:46:11.922363 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:46:16.350165 containerd[1553]: time="2026-01-28T01:46:16.350072557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:46:16.417971 containerd[1553]: time="2026-01-28T01:46:16.417815666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:16.420434 containerd[1553]: time="2026-01-28T01:46:16.420049169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:46:16.420434 containerd[1553]: time="2026-01-28T01:46:16.420171555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:46:16.421214 kubelet[2762]: E0128 01:46:16.420955 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:46:16.421214 kubelet[2762]: E0128 01:46:16.421053 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:46:16.421214 kubelet[2762]: E0128 01:46:16.421176 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:298b27a7925644a4836dbb58d943c269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:16.425190 containerd[1553]: time="2026-01-28T01:46:16.425110583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:46:16.492677 containerd[1553]: time="2026-01-28T01:46:16.492571995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:16.494744 containerd[1553]: time="2026-01-28T01:46:16.494635244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:46:16.494802 containerd[1553]: time="2026-01-28T01:46:16.494677255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:46:16.495391 kubelet[2762]: E0128 01:46:16.495208 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:46:16.495391 kubelet[2762]: E0128 01:46:16.495329 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:46:16.495604 kubelet[2762]: E0128 01:46:16.495475 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:16.497314 kubelet[2762]: E0128 01:46:16.497186 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:46:19.348511 kubelet[2762]: E0128 01:46:19.348409 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:19.350528 containerd[1553]: 
time="2026-01-28T01:46:19.350476418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:46:19.424732 containerd[1553]: time="2026-01-28T01:46:19.424579454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:19.427271 containerd[1553]: time="2026-01-28T01:46:19.427088616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:46:19.427271 containerd[1553]: time="2026-01-28T01:46:19.427165308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:19.428031 kubelet[2762]: E0128 01:46:19.427722 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:19.428031 kubelet[2762]: E0128 01:46:19.427779 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:19.428867 kubelet[2762]: E0128 01:46:19.428811 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2zt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:19.430763 kubelet[2762]: E0128 01:46:19.430506 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:20.352075 containerd[1553]: time="2026-01-28T01:46:20.351502095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:46:20.421977 containerd[1553]: time="2026-01-28T01:46:20.421762005Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:20.425066 containerd[1553]: time="2026-01-28T01:46:20.424713740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:46:20.425066 containerd[1553]: time="2026-01-28T01:46:20.424842388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:46:20.425173 kubelet[2762]: E0128 01:46:20.425034 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:46:20.425173 kubelet[2762]: E0128 01:46:20.425082 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:46:20.425677 kubelet[2762]: E0128 01:46:20.425256 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:20.428993 containerd[1553]: time="2026-01-28T01:46:20.428812998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:46:20.499686 containerd[1553]: time="2026-01-28T01:46:20.499129717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:20.504609 containerd[1553]: time="2026-01-28T01:46:20.504504070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:46:20.504711 containerd[1553]: time="2026-01-28T01:46:20.504653977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:46:20.505411 kubelet[2762]: E0128 01:46:20.505303 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:46:20.505411 kubelet[2762]: E0128 01:46:20.505368 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:46:20.505677 kubelet[2762]: E0128 01:46:20.505555 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:20.507802 kubelet[2762]: E0128 01:46:20.507724 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:22.351568 containerd[1553]: time="2026-01-28T01:46:22.350750927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:46:22.430643 containerd[1553]: time="2026-01-28T01:46:22.430093266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:22.433364 containerd[1553]: time="2026-01-28T01:46:22.433138492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:46:22.433456 containerd[1553]: time="2026-01-28T01:46:22.433431524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:22.434918 kubelet[2762]: E0128 01:46:22.434709 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:46:22.435596 kubelet[2762]: E0128 01:46:22.435020 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:46:22.435596 kubelet[2762]: E0128 01:46:22.435297 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sgb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:22.437526 kubelet[2762]: E0128 01:46:22.437296 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:46:24.349001 containerd[1553]: 
time="2026-01-28T01:46:24.348808345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:46:24.426841 containerd[1553]: time="2026-01-28T01:46:24.426257154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:24.430314 containerd[1553]: time="2026-01-28T01:46:24.430049617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:46:24.430314 containerd[1553]: time="2026-01-28T01:46:24.430112095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:46:24.430834 kubelet[2762]: E0128 01:46:24.430623 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:46:24.430834 kubelet[2762]: E0128 01:46:24.430723 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:46:24.431423 kubelet[2762]: E0128 01:46:24.430860 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82qmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:24.433504 kubelet[2762]: E0128 01:46:24.433342 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:46:26.349836 containerd[1553]: time="2026-01-28T01:46:26.349591245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:46:26.423941 containerd[1553]: time="2026-01-28T01:46:26.423700125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:26.425716 containerd[1553]: time="2026-01-28T01:46:26.425604967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:46:26.425716 containerd[1553]: time="2026-01-28T01:46:26.425695294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:26.426083 kubelet[2762]: E0128 01:46:26.426004 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:26.426433 kubelet[2762]: E0128 01:46:26.426089 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:26.426433 kubelet[2762]: 
E0128 01:46:26.426321 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:26.428342 kubelet[2762]: E0128 01:46:26.428226 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:46:29.023992 kubelet[2762]: E0128 01:46:29.023754 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:30.352051 kubelet[2762]: E0128 01:46:30.351854 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:31.355041 kubelet[2762]: E0128 01:46:31.354049 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:46:31.357690 kubelet[2762]: E0128 01:46:31.357626 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:33.350822 kubelet[2762]: E0128 01:46:33.350350 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:46:36.348473 kubelet[2762]: E0128 01:46:36.348209 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:36.714837 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:56088.service - OpenSSH per-connection server daemon (10.0.0.1:56088). 
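[Annotation] From here the failures settle into a steady cycle: each fresh pull of the ghcr.io/flatcar/calico/* images hits the same 404, fails with ErrImagePull, and the syncs in between report ImagePullBackOff while the kubelet waits out an exponentially growing delay. A Go sketch of that backoff shape — the kubelet's defaults are commonly cited as a 10s base doubling to a 5m cap, but treat those exact numbers as assumptions:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pullBackoff yields the wait before retry n of a failing image
    // pull: exponential growth from a base delay up to a hard cap.
    // This is why the log alternates ErrImagePull (a real attempt)
    // with ImagePullBackOff (waiting between attempts).
    func pullBackoff(n int, base, limit time.Duration) time.Duration {
    	d := base
    	for i := 0; i < n; i++ {
    		d *= 2
    		if d >= limit {
    			return limit
    		}
    	}
    	return d
    }

    func main() {
    	for n := 0; n < 6; n++ {
    		fmt.Printf("retry %d: wait %v\n", n, pullBackoff(n, 10*time.Second, 5*time.Minute))
    	}
    	// 10s, 20s, 40s, 1m20s, 2m40s, then capped at 5m0s
    }

The cap keeps a permanently missing tag (as here, where the registry simply has no v3.30.4 artifact) from hammering the registry, while still retrying often enough to recover quickly once the image is published.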
Jan 28 01:46:37.017782 sshd[5228]: Accepted publickey for core from 10.0.0.1 port 56088 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:46:37.024978 sshd-session[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:46:37.050319 systemd-logind[1540]: New session 10 of user core. Jan 28 01:46:37.074438 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 01:46:37.403142 kubelet[2762]: E0128 01:46:37.402619 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:46:37.535126 sshd[5232]: Connection closed by 10.0.0.1 port 56088 Jan 28 01:46:37.534841 sshd-session[5228]: pam_unix(sshd:session): session closed for user core Jan 28 01:46:37.550402 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:56088.service: Deactivated successfully. Jan 28 01:46:37.558995 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 01:46:37.565246 systemd-logind[1540]: Session 10 logged out. Waiting for processes to exit. Jan 28 01:46:37.572016 systemd-logind[1540]: Removed session 10. Jan 28 01:46:38.355297 kubelet[2762]: E0128 01:46:38.355004 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:46:41.348195 kubelet[2762]: E0128 01:46:41.348149 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:42.553773 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:56104.service - OpenSSH per-connection server daemon (10.0.0.1:56104). Jan 28 01:46:42.656712 sshd[5256]: Accepted publickey for core from 10.0.0.1 port 56104 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:46:42.659603 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:46:42.675481 systemd-logind[1540]: New session 11 of user core. Jan 28 01:46:42.692305 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 01:46:42.917410 sshd[5259]: Connection closed by 10.0.0.1 port 56104 Jan 28 01:46:42.918249 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Jan 28 01:46:42.926591 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:56104.service: Deactivated successfully. Jan 28 01:46:42.929691 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:46:42.933165 systemd-logind[1540]: Session 11 logged out. Waiting for processes to exit. 
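[annotation] The alternation between ErrImagePull and ImagePullBackOff records above is kubelet's image-pull backoff at work: after a failed pull, the pod worker refuses to retry until an escalating delay elapses, which is why the "Back-off pulling image" messages recur at growing intervals. The sketch below prints the implied retry schedule; the constants (10-second initial delay, doubling, 5-minute cap) are kubelet's commonly cited defaults and are stated here as an assumption, not read from this log.

```go
package main

import (
	"fmt"
	"time"
)

// Print the retry schedule of an exponential backoff with the assumed
// kubelet image-pull defaults: 10s initial delay, doubling, capped at 5m.
func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d at t+%v (next delay %v)\n", attempt, elapsed, delay)
		elapsed += delay
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Under these assumptions the retries settle into one attempt roughly every five minutes, consistent with the spacing of the PullImage records later in this log.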
Jan 28 01:46:42.937395 systemd-logind[1540]: Removed session 11. Jan 28 01:46:43.355838 containerd[1553]: time="2026-01-28T01:46:43.351374683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:46:43.512467 containerd[1553]: time="2026-01-28T01:46:43.512417139Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:43.522289 containerd[1553]: time="2026-01-28T01:46:43.522231909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:46:43.526569 containerd[1553]: time="2026-01-28T01:46:43.522969195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:46:43.527402 kubelet[2762]: E0128 01:46:43.526874 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:46:43.527402 kubelet[2762]: E0128 01:46:43.527106 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:46:43.527402 kubelet[2762]: E0128 01:46:43.527247 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:43.548475 containerd[1553]: time="2026-01-28T01:46:43.538747840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:46:43.670546 containerd[1553]: time="2026-01-28T01:46:43.661223523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:43.706990 containerd[1553]: time="2026-01-28T01:46:43.705161831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:46:43.706990 containerd[1553]: time="2026-01-28T01:46:43.705269501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:46:43.707238 kubelet[2762]: E0128 01:46:43.706317 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:46:43.707238 kubelet[2762]: E0128 01:46:43.706431 2762 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:46:43.707429 kubelet[2762]: E0128 01:46:43.707355 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:43.710138 kubelet[2762]: E0128 01:46:43.710081 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:44.366653 containerd[1553]: time="2026-01-28T01:46:44.366611150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:46:44.510561 containerd[1553]: time="2026-01-28T01:46:44.510305680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:44.535781 containerd[1553]: time="2026-01-28T01:46:44.535735055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:44.536131 containerd[1553]: time="2026-01-28T01:46:44.536089583Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:46:44.536979 kubelet[2762]: E0128 01:46:44.536534 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:44.536979 kubelet[2762]: E0128 01:46:44.536590 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:44.536979 kubelet[2762]: E0128 01:46:44.536723 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2zt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:44.538325 kubelet[2762]: E0128 01:46:44.537845 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:46.354997 kubelet[2762]: E0128 01:46:46.354620 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:46:46.356819 containerd[1553]: time="2026-01-28T01:46:46.356784210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:46:46.461108 containerd[1553]: time="2026-01-28T01:46:46.461002263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:46.465671 containerd[1553]: time="2026-01-28T01:46:46.465624677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:46:46.466305 containerd[1553]: time="2026-01-28T01:46:46.466261929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:46:46.467769 kubelet[2762]: E0128 01:46:46.467509 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:46:46.468232 kubelet[2762]: E0128 01:46:46.467879 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:46:46.468232 kubelet[2762]: E0128 01:46:46.468542 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:298b27a7925644a4836dbb58d943c269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:46.472094 containerd[1553]: time="2026-01-28T01:46:46.470864592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:46:46.561109 containerd[1553]: time="2026-01-28T01:46:46.561003056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:46.569182 containerd[1553]: time="2026-01-28T01:46:46.568812500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:46:46.569426 containerd[1553]: time="2026-01-28T01:46:46.569306617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:46.570677 kubelet[2762]: E0128 01:46:46.570632 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:46:46.570992 kubelet[2762]: E0128 01:46:46.570776 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:46:46.571367 kubelet[2762]: E0128 01:46:46.571215 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sgb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:46.572228 containerd[1553]: time="2026-01-28T01:46:46.571823508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:46:46.572419 kubelet[2762]: E0128 
01:46:46.572394 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:46:46.654538 containerd[1553]: time="2026-01-28T01:46:46.650669096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:46.662635 containerd[1553]: time="2026-01-28T01:46:46.662116516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:46:46.662635 containerd[1553]: time="2026-01-28T01:46:46.662240517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:46:46.667245 kubelet[2762]: E0128 01:46:46.666433 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:46:46.667245 kubelet[2762]: E0128 01:46:46.666501 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:46:46.667245 kubelet[2762]: E0128 01:46:46.666660 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:46.672525 kubelet[2762]: E0128 01:46:46.670661 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:46:47.972688 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:52396.service - OpenSSH per-connection server daemon (10.0.0.1:52396). Jan 28 01:46:48.176254 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 52396 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:46:48.182616 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:46:48.212274 systemd-logind[1540]: New session 12 of user core. 
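[annotation] Since only tag resolution fails, a natural diagnostic follow-up is to ask the registry which tags the repository actually serves. The sketch below queries the standard OCI distribution tags endpoint, again via ghcr.io's anonymous-token flow; it is a hypothetical helper for comparison against the v3.30.4 reference containerd cannot resolve, not something invoked anywhere in this log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// List the tags ghcr.io serves for flatcar/calico/csi, to compare against
// the v3.30.4 reference that fails to resolve in the records above.
func main() {
	const name = "flatcar/calico/csi"

	tr, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + name + ":pull")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	req, _ := http.NewRequest(http.MethodGet, "https://ghcr.io/v2/"+name+"/tags/list", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	var tags struct {
		Name string   `json:"name"`
		Tags []string `json:"tags"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tags); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(tags.Name, tags.Tags)
}
```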
Jan 28 01:46:48.233388 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:46:48.839225 sshd[5277]: Connection closed by 10.0.0.1 port 52396 Jan 28 01:46:48.836067 sshd-session[5274]: pam_unix(sshd:session): session closed for user core Jan 28 01:46:48.859780 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:52396.service: Deactivated successfully. Jan 28 01:46:48.873355 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:46:48.887685 systemd-logind[1540]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:46:48.904095 systemd-logind[1540]: Removed session 12. Jan 28 01:46:50.395319 containerd[1553]: time="2026-01-28T01:46:50.390996318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:46:50.526648 containerd[1553]: time="2026-01-28T01:46:50.522680511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:50.534245 containerd[1553]: time="2026-01-28T01:46:50.530698498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:46:50.534245 containerd[1553]: time="2026-01-28T01:46:50.531339067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:46:50.550759 kubelet[2762]: E0128 01:46:50.550679 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:50.555065 kubelet[2762]: E0128 01:46:50.554562 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:46:50.555065 kubelet[2762]: E0128 01:46:50.554742 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:50.559575 kubelet[2762]: E0128 01:46:50.559508 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:46:51.361640 containerd[1553]: time="2026-01-28T01:46:51.361595621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:46:51.478704 containerd[1553]: time="2026-01-28T01:46:51.478474866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:46:51.490561 containerd[1553]: time="2026-01-28T01:46:51.490444602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:46:51.490725 containerd[1553]: time="2026-01-28T01:46:51.490658537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:46:51.491531 kubelet[2762]: E0128 01:46:51.490878 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:46:51.491531 kubelet[2762]: E0128 01:46:51.491313 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:46:51.492163 kubelet[2762]: E0128 01:46:51.491867 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82qmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:46:51.497363 kubelet[2762]: E0128 01:46:51.497312 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:46:53.926430 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:52400.service - OpenSSH per-connection server daemon (10.0.0.1:52400). Jan 28 01:46:54.150113 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 52400 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:46:54.152398 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:46:54.186977 systemd-logind[1540]: New session 13 of user core. Jan 28 01:46:54.199271 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:46:54.785700 sshd[5297]: Connection closed by 10.0.0.1 port 52400 Jan 28 01:46:54.789355 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Jan 28 01:46:54.813346 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:52400.service: Deactivated successfully. Jan 28 01:46:54.826744 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:46:54.841310 systemd-logind[1540]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:46:54.857697 systemd-logind[1540]: Removed session 13. 
Jan 28 01:46:55.356537 kubelet[2762]: E0128 01:46:55.356112 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:46:57.362290 kubelet[2762]: E0128 01:46:57.358661 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:46:59.840872 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:39414.service - OpenSSH per-connection server daemon (10.0.0.1:39414). Jan 28 01:47:00.133631 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 39414 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:00.151433 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:00.191717 systemd-logind[1540]: New session 14 of user core. Jan 28 01:47:00.216431 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:47:00.995839 sshd[5339]: Connection closed by 10.0.0.1 port 39414 Jan 28 01:47:00.999384 sshd-session[5336]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:01.028580 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:39414.service: Deactivated successfully. Jan 28 01:47:01.050505 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:47:01.067681 systemd-logind[1540]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:47:01.078594 systemd-logind[1540]: Removed session 14. 
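[annotation] Interleaved with the pull failures, sshd and systemd-logind trace a steady rhythm of short sessions from 10.0.0.1 (sessions 10 through 14 so far), each opened and closed within about a second, which reads like periodic automation rather than interactive use. The sketch below is a hypothetical helper, not part of this log, that pairs logind's "New session"/"Removed session" records from a journal stream (assumed one record per line, as journalctl emits) to measure session lifetimes.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Pair systemd-logind "New session N" / "Removed session N" records from a
// journal stream on stdin and print each session's lifetime.
func main() {
	reNew := regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+)\.\d+ .*New session (\d+) of user`)
	reGone := regexp.MustCompile(`^(\w+ \d+ \d+:\d+:\d+)\.\d+ .*Removed session (\d+)\.`)
	opened := map[string]time.Time{}

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := reNew.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse("Jan 2 15:04:05", m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := reGone.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse("Jan 2 15:04:05", m[1]); err == nil {
				if start, ok := opened[m[2]]; ok {
					fmt.Printf("session %s lived %v\n", m[2], t.Sub(start))
					delete(opened, m[2])
				}
			}
		}
	}
}
```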
Jan 28 01:47:01.363774 kubelet[2762]: E0128 01:47:01.363266 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:47:01.369771 kubelet[2762]: E0128 01:47:01.365622 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:47:05.358113 kubelet[2762]: E0128 01:47:05.356535 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:47:06.071422 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:53104.service - OpenSSH per-connection server daemon (10.0.0.1:53104). Jan 28 01:47:06.306283 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 53104 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:06.309628 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:06.327481 systemd-logind[1540]: New session 15 of user core. 
Jan 28 01:47:06.351487 kubelet[2762]: E0128 01:47:06.348490 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:47:06.353264 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:47:07.067969 sshd[5358]: Connection closed by 10.0.0.1 port 53104 Jan 28 01:47:07.068239 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:07.083330 systemd-logind[1540]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:47:07.089522 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:53104.service: Deactivated successfully. Jan 28 01:47:07.095290 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:47:07.110778 systemd-logind[1540]: Removed session 15. Jan 28 01:47:07.357654 kubelet[2762]: E0128 01:47:07.357456 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:47:08.360231 kubelet[2762]: E0128 01:47:08.357310 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:47:12.122655 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114). 
Jan 28 01:47:12.362651 kubelet[2762]: E0128 01:47:12.361150 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:12.450267 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:12.448653 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:12.475413 systemd-logind[1540]: New session 16 of user core. Jan 28 01:47:12.510595 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:47:13.148324 sshd[5378]: Connection closed by 10.0.0.1 port 53114 Jan 28 01:47:13.152729 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:13.163273 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:53114.service: Deactivated successfully. Jan 28 01:47:13.178333 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:47:13.183467 systemd-logind[1540]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:47:13.198147 systemd-logind[1540]: Removed session 16. Jan 28 01:47:13.395201 kubelet[2762]: E0128 01:47:13.395106 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:47:15.360225 kubelet[2762]: E0128 01:47:15.359772 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:47:18.211186 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:44620.service - OpenSSH per-connection server daemon (10.0.0.1:44620). Jan 28 01:47:18.543365 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 44620 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:18.549638 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:18.575300 systemd-logind[1540]: New session 17 of user core. Jan 28 01:47:18.594530 systemd[1]: Started session-17.scope - Session 17 of User core. 
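[annotation] The recurring "Nameserver limits exceeded" warning comes from kubelet's dns.go capping a pod's resolv.conf at three nameservers, matching the glibc resolver limit (MAXNS = 3): the node's resolv.conf evidently lists more, and kubelet applies only 1.1.1.1, 1.0.0.1, and 8.8.8.8 while omitting the rest. Below is a minimal sketch of the same check, illustrative only and not kubelet's actual code.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Count nameserver entries in resolv.conf and report which ones a
// three-nameserver limit (glibc MAXNS) would drop, mirroring the
// kubelet warning in the records above.
func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	const maxNS = 3 // glibc MAXNS; kubelet applies the same cutoff per pod
	if len(servers) > maxNS {
		fmt.Printf("limit exceeded: applied %v, omitted %v\n", servers[:maxNS], servers[maxNS:])
	} else {
		fmt.Printf("within limit: %v\n", servers)
	}
}
```

The warning is benign for resolution through the first three servers, but any servers past the cutoff are silently unused by pods.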
Jan 28 01:47:19.272318 sshd[5396]: Connection closed by 10.0.0.1 port 44620 Jan 28 01:47:19.272139 sshd-session[5393]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:19.283882 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:44620.service: Deactivated successfully. Jan 28 01:47:19.296834 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:47:19.315709 systemd-logind[1540]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:47:19.332624 systemd-logind[1540]: Removed session 17. Jan 28 01:47:19.403573 kubelet[2762]: E0128 01:47:19.403432 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:47:19.413423 kubelet[2762]: E0128 01:47:19.413366 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:47:21.353007 kubelet[2762]: E0128 01:47:21.350256 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:21.353007 kubelet[2762]: E0128 01:47:21.366426 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:47:22.352176 kubelet[2762]: E0128 01:47:22.351830 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:47:24.360571 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:44636.service - OpenSSH per-connection server daemon (10.0.0.1:44636). Jan 28 01:47:24.774011 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 44636 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:24.781716 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:24.864704 systemd-logind[1540]: New session 18 of user core. Jan 28 01:47:24.893769 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:47:25.591758 sshd[5426]: Connection closed by 10.0.0.1 port 44636 Jan 28 01:47:25.592857 sshd-session[5423]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:25.632686 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:44636.service: Deactivated successfully. Jan 28 01:47:25.649742 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:47:25.656514 systemd-logind[1540]: Session 18 logged out. Waiting for processes to exit. Jan 28 01:47:25.671594 systemd-logind[1540]: Removed session 18. Jan 28 01:47:27.359261 kubelet[2762]: E0128 01:47:27.358695 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:27.379152 containerd[1553]: time="2026-01-28T01:47:27.372291908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:47:27.504587 containerd[1553]: time="2026-01-28T01:47:27.504340325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:27.514612 containerd[1553]: time="2026-01-28T01:47:27.514578653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:47:27.515304 containerd[1553]: time="2026-01-28T01:47:27.515146159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:47:27.521200 kubelet[2762]: E0128 01:47:27.517414 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:47:27.521200 kubelet[2762]: E0128 01:47:27.517475 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:47:27.521200 kubelet[2762]: E0128 01:47:27.517703 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:298b27a7925644a4836dbb58d943c269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:27.526353 containerd[1553]: time="2026-01-28T01:47:27.521777355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:47:27.661414 containerd[1553]: time="2026-01-28T01:47:27.660686128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:27.671728 containerd[1553]: time="2026-01-28T01:47:27.671656788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:47:27.673389 containerd[1553]: time="2026-01-28T01:47:27.671765949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:47:27.676556 kubelet[2762]: E0128 01:47:27.676298 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:47:27.677345 kubelet[2762]: E0128 01:47:27.676604 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:47:27.680129 
kubelet[2762]: E0128 01:47:27.679824 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sgb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:27.681849 kubelet[2762]: E0128 01:47:27.681783 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:47:27.684154 containerd[1553]: time="2026-01-28T01:47:27.683880447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:47:27.797328 containerd[1553]: time="2026-01-28T01:47:27.797277510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:27.800465 containerd[1553]: time="2026-01-28T01:47:27.800423804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:47:27.800636 containerd[1553]: time="2026-01-28T01:47:27.800601183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:47:27.805324 kubelet[2762]: E0128 01:47:27.802776 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:47:27.805324 kubelet[2762]: E0128 01:47:27.802883 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:47:27.805324 kubelet[2762]: E0128 01:47:27.804239 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:27.808631 kubelet[2762]: E0128 01:47:27.807802 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:47:30.620820 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:37664.service - OpenSSH per-connection server daemon (10.0.0.1:37664). Jan 28 01:47:30.869361 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 37664 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:30.875807 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:30.918220 systemd-logind[1540]: New session 19 of user core. 
Jan 28 01:47:30.946313 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:47:31.377305 kubelet[2762]: E0128 01:47:31.377258 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:47:31.567628 sshd[5475]: Connection closed by 10.0.0.1 port 37664 Jan 28 01:47:31.572731 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:31.623136 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:37664.service: Deactivated successfully. Jan 28 01:47:31.637832 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:47:31.650667 systemd-logind[1540]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:47:31.669676 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:37666.service - OpenSSH per-connection server daemon (10.0.0.1:37666). Jan 28 01:47:31.679227 systemd-logind[1540]: Removed session 19. Jan 28 01:47:31.888634 sshd[5497]: Accepted publickey for core from 10.0.0.1 port 37666 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:31.894822 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:31.943259 systemd-logind[1540]: New session 20 of user core. Jan 28 01:47:31.944603 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:47:32.741529 sshd[5500]: Connection closed by 10.0.0.1 port 37666 Jan 28 01:47:32.746429 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:32.770865 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:37666.service: Deactivated successfully. Jan 28 01:47:32.776718 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:47:32.780503 systemd-logind[1540]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:47:32.790019 systemd-logind[1540]: Removed session 20. Jan 28 01:47:32.792319 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:37680.service - OpenSSH per-connection server daemon (10.0.0.1:37680). Jan 28 01:47:33.060377 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 37680 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:33.060675 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:33.095123 systemd-logind[1540]: New session 21 of user core. Jan 28 01:47:33.116181 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 28 01:47:33.352621 containerd[1553]: time="2026-01-28T01:47:33.352048494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:47:33.512285 containerd[1553]: time="2026-01-28T01:47:33.509770789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:33.524724 containerd[1553]: time="2026-01-28T01:47:33.524666408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:47:33.528237 containerd[1553]: time="2026-01-28T01:47:33.527834434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:47:33.537243 kubelet[2762]: E0128 01:47:33.530723 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:47:33.537243 kubelet[2762]: E0128 01:47:33.530785 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:47:33.541358 kubelet[2762]: E0128 01:47:33.534763 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:33.549599 kubelet[2762]: E0128 01:47:33.542880 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:47:33.826049 sshd[5515]: Connection closed by 10.0.0.1 port 37680 Jan 28 01:47:33.831485 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:33.871867 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:37680.service: Deactivated successfully. Jan 28 01:47:33.895610 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:47:33.904652 systemd-logind[1540]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:47:33.918407 systemd-logind[1540]: Removed session 21. 
Jan 28 01:47:35.375432 containerd[1553]: time="2026-01-28T01:47:35.373514957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:47:35.464810 containerd[1553]: time="2026-01-28T01:47:35.464586738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:35.481508 containerd[1553]: time="2026-01-28T01:47:35.481443631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:47:35.485405 containerd[1553]: time="2026-01-28T01:47:35.481857160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:47:35.490732 kubelet[2762]: E0128 01:47:35.490020 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:47:35.490732 kubelet[2762]: E0128 01:47:35.490154 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:47:35.490732 kubelet[2762]: E0128 01:47:35.490313 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2zt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:35.494798 kubelet[2762]: E0128 01:47:35.492002 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:47:36.379186 containerd[1553]: time="2026-01-28T01:47:36.374401111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:47:36.507237 containerd[1553]: time="2026-01-28T01:47:36.507182154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:36.511248 containerd[1553]: time="2026-01-28T01:47:36.510858004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:47:36.511248 containerd[1553]: time="2026-01-28T01:47:36.511044362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:47:36.514514 kubelet[2762]: E0128 01:47:36.513489 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:47:36.514514 kubelet[2762]: E0128 01:47:36.513603 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:47:36.514514 kubelet[2762]: E0128 01:47:36.513752 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:36.517654 containerd[1553]: time="2026-01-28T01:47:36.516679107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:47:36.595984 containerd[1553]: time="2026-01-28T01:47:36.595618821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:36.606005 containerd[1553]: time="2026-01-28T01:47:36.605242038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:47:36.606005 containerd[1553]: time="2026-01-28T01:47:36.605376118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:47:36.610383 kubelet[2762]: E0128 01:47:36.605719 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:47:36.610383 kubelet[2762]: E0128 01:47:36.605783 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:47:36.613385 kubelet[2762]: E0128 01:47:36.612778 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:36.614359 kubelet[2762]: E0128 01:47:36.614246 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:47:38.891821 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:55678.service - OpenSSH per-connection server daemon (10.0.0.1:55678). Jan 28 01:47:39.153054 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 55678 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:39.161705 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:39.201302 systemd-logind[1540]: New session 22 of user core. Jan 28 01:47:39.231218 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 01:47:39.352213 kubelet[2762]: E0128 01:47:39.350378 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:39.352213 kubelet[2762]: E0128 01:47:39.350700 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:39.840341 sshd[5542]: Connection closed by 10.0.0.1 port 55678 Jan 28 01:47:39.844993 sshd-session[5539]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:39.854787 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:55678.service: Deactivated successfully. Jan 28 01:47:39.868826 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:47:39.877394 systemd-logind[1540]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:47:39.886718 systemd-logind[1540]: Removed session 22. 
Jan 28 01:47:40.356654 kubelet[2762]: E0128 01:47:40.356432 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:47:41.370813 kubelet[2762]: E0128 01:47:41.370451 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:47:42.352737 kubelet[2762]: E0128 01:47:42.351404 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:44.894678 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:58028.service - OpenSSH per-connection server daemon (10.0.0.1:58028). Jan 28 01:47:45.134061 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 58028 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:45.138862 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:45.177356 systemd-logind[1540]: New session 23 of user core. Jan 28 01:47:45.182859 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:47:45.604154 sshd[5559]: Connection closed by 10.0.0.1 port 58028 Jan 28 01:47:45.605844 sshd-session[5556]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:45.623786 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:58028.service: Deactivated successfully. Jan 28 01:47:45.634615 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:47:45.641868 systemd-logind[1540]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:47:45.653010 systemd-logind[1540]: Removed session 23. 
Jan 28 01:47:46.354299 kubelet[2762]: E0128 01:47:46.352706 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:47:46.360764 containerd[1553]: time="2026-01-28T01:47:46.353835888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:47:46.457492 containerd[1553]: time="2026-01-28T01:47:46.457442880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:47:46.472007 containerd[1553]: time="2026-01-28T01:47:46.471593337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:47:46.472007 containerd[1553]: time="2026-01-28T01:47:46.471996298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:47:46.475025 kubelet[2762]: E0128 01:47:46.472671 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:47:46.475025 kubelet[2762]: E0128 01:47:46.472735 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:47:46.475025 kubelet[2762]: E0128 01:47:46.472881 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82qmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:47:46.475025 kubelet[2762]: E0128 01:47:46.474610 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:47:48.360486 kubelet[2762]: E0128 01:47:48.358309 2762 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:47:50.646752 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:58034.service - OpenSSH per-connection server daemon (10.0.0.1:58034). Jan 28 01:47:50.864428 sshd[5573]: Accepted publickey for core from 10.0.0.1 port 58034 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:50.874507 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:50.935675 systemd-logind[1540]: New session 24 of user core. Jan 28 01:47:50.961860 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 28 01:47:51.354474 kubelet[2762]: E0128 01:47:51.351721 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:51.379098 kubelet[2762]: E0128 01:47:51.378290 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:47:51.777350 sshd[5576]: Connection closed by 10.0.0.1 port 58034 Jan 28 01:47:51.775777 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:51.796020 systemd-logind[1540]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:47:51.804315 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:58034.service: Deactivated successfully. Jan 28 01:47:51.813881 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:47:51.834432 systemd-logind[1540]: Removed session 24. 
Jan 28 01:47:52.359388 kubelet[2762]: E0128 01:47:52.358696 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:47:54.356197 kubelet[2762]: E0128 01:47:54.356068 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:47:56.806418 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:54580.service - OpenSSH per-connection server daemon (10.0.0.1:54580). Jan 28 01:47:57.016202 sshd[5591]: Accepted publickey for core from 10.0.0.1 port 54580 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:47:57.019594 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:47:57.050387 systemd-logind[1540]: New session 25 of user core. Jan 28 01:47:57.078115 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 28 01:47:57.642004 sshd[5594]: Connection closed by 10.0.0.1 port 54580 Jan 28 01:47:57.643793 sshd-session[5591]: pam_unix(sshd:session): session closed for user core Jan 28 01:47:57.657689 systemd-logind[1540]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:47:57.671546 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:54580.service: Deactivated successfully. Jan 28 01:47:57.683355 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:47:57.697256 systemd-logind[1540]: Removed session 25. 
Jan 28 01:47:58.355657 kubelet[2762]: E0128 01:47:58.350715 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:47:58.369378 kubelet[2762]: E0128 01:47:58.365589 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:48:01.356290 kubelet[2762]: E0128 01:48:01.356085 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:48:02.699503 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:54590.service - OpenSSH per-connection server daemon (10.0.0.1:54590). Jan 28 01:48:03.115515 sshd[5635]: Accepted publickey for core from 10.0.0.1 port 54590 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:03.126075 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:03.164821 systemd-logind[1540]: New session 26 of user core. Jan 28 01:48:03.178345 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 01:48:03.359370 kubelet[2762]: E0128 01:48:03.358597 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:48:03.692365 sshd[5638]: Connection closed by 10.0.0.1 port 54590 Jan 28 01:48:03.692793 sshd-session[5635]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:03.707390 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:54590.service: Deactivated successfully. Jan 28 01:48:03.724034 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:48:03.737464 systemd-logind[1540]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:48:03.742847 systemd-logind[1540]: Removed session 26. 
Jan 28 01:48:06.365362 kubelet[2762]: E0128 01:48:06.365276 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:48:06.375559 kubelet[2762]: E0128 01:48:06.375345 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:48:08.724634 systemd[1]: Started sshd@26-10.0.0.33:22-10.0.0.1:55126.service - OpenSSH per-connection server daemon (10.0.0.1:55126). Jan 28 01:48:08.847730 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 55126 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:08.852651 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:08.874106 systemd-logind[1540]: New session 27 of user core. Jan 28 01:48:08.889303 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 28 01:48:09.265353 sshd[5656]: Connection closed by 10.0.0.1 port 55126 Jan 28 01:48:09.264262 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:09.291055 systemd[1]: sshd@26-10.0.0.33:22-10.0.0.1:55126.service: Deactivated successfully. Jan 28 01:48:09.294508 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 01:48:09.308489 systemd-logind[1540]: Session 27 logged out. Waiting for processes to exit. Jan 28 01:48:09.315240 systemd-logind[1540]: Removed session 27. 
Jan 28 01:48:09.352809 kubelet[2762]: E0128 01:48:09.352688 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:48:10.379352 kubelet[2762]: E0128 01:48:10.377723 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:48:13.355022 kubelet[2762]: E0128 01:48:13.351247 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:48:14.318617 systemd[1]: Started sshd@27-10.0.0.33:22-10.0.0.1:55134.service - OpenSSH per-connection server daemon (10.0.0.1:55134). Jan 28 01:48:14.526311 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:14.533270 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:14.557283 systemd-logind[1540]: New session 28 of user core. Jan 28 01:48:14.567233 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 01:48:14.963404 sshd[5672]: Connection closed by 10.0.0.1 port 55134 Jan 28 01:48:14.962005 sshd-session[5669]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:14.978082 systemd[1]: sshd@27-10.0.0.33:22-10.0.0.1:55134.service: Deactivated successfully. Jan 28 01:48:14.990859 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 01:48:15.027092 systemd-logind[1540]: Session 28 logged out. Waiting for processes to exit. Jan 28 01:48:15.037614 systemd-logind[1540]: Removed session 28. 
Jan 28 01:48:17.353832 kubelet[2762]: E0128 01:48:17.352966 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:48:18.365876 kubelet[2762]: E0128 01:48:18.365766 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:48:18.375163 kubelet[2762]: E0128 01:48:18.374952 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:48:20.001711 systemd[1]: Started sshd@28-10.0.0.33:22-10.0.0.1:42718.service - OpenSSH per-connection server daemon (10.0.0.1:42718). Jan 28 01:48:20.183610 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 42718 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:20.189532 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:20.231023 systemd-logind[1540]: New session 29 of user core. Jan 28 01:48:20.274180 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 28 01:48:20.351696 kubelet[2762]: E0128 01:48:20.351030 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:48:20.735141 sshd[5694]: Connection closed by 10.0.0.1 port 42718 Jan 28 01:48:20.747834 sshd-session[5690]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:20.767590 systemd[1]: sshd@28-10.0.0.33:22-10.0.0.1:42718.service: Deactivated successfully. Jan 28 01:48:20.774687 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 01:48:20.782341 systemd-logind[1540]: Session 29 logged out. Waiting for processes to exit. Jan 28 01:48:20.787688 systemd-logind[1540]: Removed session 29. Jan 28 01:48:21.359583 kubelet[2762]: E0128 01:48:21.353977 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:48:23.353421 kubelet[2762]: E0128 01:48:23.348039 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:25.772612 systemd[1]: Started sshd@29-10.0.0.33:22-10.0.0.1:41482.service - OpenSSH per-connection server daemon (10.0.0.1:41482). Jan 28 01:48:25.982727 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 41482 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:25.992793 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:26.027109 systemd-logind[1540]: New session 30 of user core. Jan 28 01:48:26.048137 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 01:48:26.370726 kubelet[2762]: E0128 01:48:26.364234 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:48:26.655695 sshd[5712]: Connection closed by 10.0.0.1 port 41482 Jan 28 01:48:26.657144 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:26.677999 systemd[1]: sshd@29-10.0.0.33:22-10.0.0.1:41482.service: Deactivated successfully. 
Jan 28 01:48:26.691621 systemd[1]: session-30.scope: Deactivated successfully. Jan 28 01:48:26.707221 systemd-logind[1540]: Session 30 logged out. Waiting for processes to exit. Jan 28 01:48:26.717299 systemd-logind[1540]: Removed session 30. Jan 28 01:48:29.367565 kubelet[2762]: E0128 01:48:29.364141 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:48:31.706370 systemd[1]: Started sshd@30-10.0.0.33:22-10.0.0.1:41498.service - OpenSSH per-connection server daemon (10.0.0.1:41498). Jan 28 01:48:31.909974 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 41498 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:31.918146 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:31.946421 systemd-logind[1540]: New session 31 of user core. Jan 28 01:48:31.959613 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 28 01:48:32.350451 sshd[5754]: Connection closed by 10.0.0.1 port 41498 Jan 28 01:48:32.353669 sshd-session[5751]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:32.377081 kubelet[2762]: E0128 01:48:32.371862 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:48:32.377081 kubelet[2762]: E0128 01:48:32.372707 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:48:32.379219 systemd[1]: sshd@30-10.0.0.33:22-10.0.0.1:41498.service: Deactivated successfully. Jan 28 01:48:32.392654 systemd[1]: session-31.scope: Deactivated successfully. 
Jan 28 01:48:32.401323 systemd-logind[1540]: Session 31 logged out. Waiting for processes to exit. Jan 28 01:48:32.417569 systemd-logind[1540]: Removed session 31. Jan 28 01:48:33.354392 kubelet[2762]: E0128 01:48:33.354109 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:48:35.369427 kubelet[2762]: E0128 01:48:35.364201 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:48:37.353854 kubelet[2762]: E0128 01:48:37.353203 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:37.353854 kubelet[2762]: E0128 01:48:37.353757 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:48:37.427727 systemd[1]: Started sshd@31-10.0.0.33:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174). Jan 28 01:48:37.696460 sshd[5767]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:37.701473 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:37.737844 systemd-logind[1540]: New session 32 of user core. Jan 28 01:48:37.755471 systemd[1]: Started session-32.scope - Session 32 of User core. 
Jan 28 01:48:38.213061 sshd[5770]: Connection closed by 10.0.0.1 port 43174 Jan 28 01:48:38.214229 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:38.230201 systemd[1]: sshd@31-10.0.0.33:22-10.0.0.1:43174.service: Deactivated successfully. Jan 28 01:48:38.245100 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 01:48:38.255476 systemd-logind[1540]: Session 32 logged out. Waiting for processes to exit. Jan 28 01:48:38.264600 systemd-logind[1540]: Removed session 32. Jan 28 01:48:40.366882 kubelet[2762]: E0128 01:48:40.366753 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:42.358093 kubelet[2762]: E0128 01:48:42.356974 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:48:43.264481 systemd[1]: Started sshd@32-10.0.0.33:22-10.0.0.1:43176.service - OpenSSH per-connection server daemon (10.0.0.1:43176). Jan 28 01:48:43.454956 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 43176 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:43.462418 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:43.490025 systemd-logind[1540]: New session 33 of user core. Jan 28 01:48:43.504850 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 28 01:48:44.047881 sshd[5794]: Connection closed by 10.0.0.1 port 43176 Jan 28 01:48:44.048496 sshd-session[5791]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:44.069879 systemd[1]: sshd@32-10.0.0.33:22-10.0.0.1:43176.service: Deactivated successfully. Jan 28 01:48:44.095588 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 01:48:44.103420 systemd-logind[1540]: Session 33 logged out. Waiting for processes to exit. Jan 28 01:48:44.120219 systemd-logind[1540]: Removed session 33. 
Jan 28 01:48:44.353557 kubelet[2762]: E0128 01:48:44.348283 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:45.352424 kubelet[2762]: E0128 01:48:45.351540 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:45.370568 kubelet[2762]: E0128 01:48:45.368008 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:48:46.361545 kubelet[2762]: E0128 01:48:46.353548 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:48:47.351476 kubelet[2762]: E0128 01:48:47.350202 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:48:48.390591 kubelet[2762]: E0128 01:48:48.390387 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:48:49.094725 systemd[1]: Started sshd@33-10.0.0.33:22-10.0.0.1:53672.service - OpenSSH per-connection server daemon 
(10.0.0.1:53672). Jan 28 01:48:49.321218 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 53672 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:49.325662 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:49.336210 systemd-logind[1540]: New session 34 of user core. Jan 28 01:48:49.352417 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 28 01:48:49.785118 sshd[5810]: Connection closed by 10.0.0.1 port 53672 Jan 28 01:48:49.783268 sshd-session[5807]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:49.800588 systemd[1]: sshd@33-10.0.0.33:22-10.0.0.1:53672.service: Deactivated successfully. Jan 28 01:48:49.813251 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 01:48:49.824234 systemd-logind[1540]: Session 34 logged out. Waiting for processes to exit. Jan 28 01:48:49.827837 systemd-logind[1540]: Removed session 34. Jan 28 01:48:50.368636 kubelet[2762]: E0128 01:48:50.368100 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:48:54.829591 systemd[1]: Started sshd@34-10.0.0.33:22-10.0.0.1:55598.service - OpenSSH per-connection server daemon (10.0.0.1:55598). Jan 28 01:48:55.074255 sshd[5825]: Accepted publickey for core from 10.0.0.1 port 55598 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:55.081797 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:55.125166 systemd-logind[1540]: New session 35 of user core. Jan 28 01:48:55.140855 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 28 01:48:55.364100 kubelet[2762]: E0128 01:48:55.359697 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:48:55.616743 sshd[5828]: Connection closed by 10.0.0.1 port 55598 Jan 28 01:48:55.620401 sshd-session[5825]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:55.643007 systemd[1]: sshd@34-10.0.0.33:22-10.0.0.1:55598.service: Deactivated successfully. Jan 28 01:48:55.654543 systemd[1]: session-35.scope: Deactivated successfully. Jan 28 01:48:55.661004 systemd-logind[1540]: Session 35 logged out. Waiting for processes to exit. Jan 28 01:48:55.674030 systemd[1]: Started sshd@35-10.0.0.33:22-10.0.0.1:55600.service - OpenSSH per-connection server daemon (10.0.0.1:55600). Jan 28 01:48:55.684280 systemd-logind[1540]: Removed session 35. Jan 28 01:48:55.918966 sshd[5841]: Accepted publickey for core from 10.0.0.1 port 55600 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:55.926303 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:55.961661 systemd-logind[1540]: New session 36 of user core. Jan 28 01:48:55.974786 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 28 01:48:57.261615 sshd[5844]: Connection closed by 10.0.0.1 port 55600 Jan 28 01:48:57.259195 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:57.323960 systemd[1]: sshd@35-10.0.0.33:22-10.0.0.1:55600.service: Deactivated successfully. Jan 28 01:48:57.344611 systemd[1]: session-36.scope: Deactivated successfully. Jan 28 01:48:57.357779 systemd-logind[1540]: Session 36 logged out. Waiting for processes to exit. Jan 28 01:48:57.378664 systemd[1]: Started sshd@36-10.0.0.33:22-10.0.0.1:55612.service - OpenSSH per-connection server daemon (10.0.0.1:55612). Jan 28 01:48:57.389816 systemd-logind[1540]: Removed session 36. Jan 28 01:48:57.568567 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 55612 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:48:57.574705 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:48:57.597962 systemd-logind[1540]: New session 37 of user core. Jan 28 01:48:57.610442 systemd[1]: Started session-37.scope - Session 37 of User core. 
Jan 28 01:48:58.357186 containerd[1553]: time="2026-01-28T01:48:58.356751219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:48:58.531522 containerd[1553]: time="2026-01-28T01:48:58.528867704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:48:58.537793 containerd[1553]: time="2026-01-28T01:48:58.537618850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:48:58.537793 containerd[1553]: time="2026-01-28T01:48:58.537786061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:48:58.539302 kubelet[2762]: E0128 01:48:58.539183 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:48:58.539860 kubelet[2762]: E0128 01:48:58.539302 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:48:58.539860 kubelet[2762]: E0128 01:48:58.539532 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8sgb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vgk8b_calico-system(c966ee1e-4a54-4737-b8f4-7c2be261a470): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:48:58.541955 kubelet[2762]: E0128 01:48:58.541672 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:48:59.364336 kubelet[2762]: E0128 01:48:59.359615 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:48:59.372693 containerd[1553]: time="2026-01-28T01:48:59.372601407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:48:59.504854 containerd[1553]: time="2026-01-28T01:48:59.504740829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:48:59.516265 containerd[1553]: time="2026-01-28T01:48:59.516113235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:48:59.516265 containerd[1553]: time="2026-01-28T01:48:59.516271970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:48:59.522129 kubelet[2762]: E0128 01:48:59.516869 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:48:59.522129 kubelet[2762]: E0128 
01:48:59.517158 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:48:59.525509 kubelet[2762]: E0128 01:48:59.525432 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:298b27a7925644a4836dbb58d943c269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:48:59.532963 containerd[1553]: time="2026-01-28T01:48:59.532726132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:48:59.632707 containerd[1553]: time="2026-01-28T01:48:59.630809756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:48:59.638088 containerd[1553]: time="2026-01-28T01:48:59.637852726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:48:59.639112 containerd[1553]: time="2026-01-28T01:48:59.637992719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:48:59.639187 kubelet[2762]: E0128 01:48:59.638422 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:48:59.639187 kubelet[2762]: E0128 01:48:59.638489 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:48:59.639187 kubelet[2762]: E0128 01:48:59.638627 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmwnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc7f69c44-p8qpq_calico-system(bfdea9eb-0bce-4c15-b321-f9c7a00efdf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:48:59.650143 kubelet[2762]: E0128 01:48:59.644352 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:48:59.930186 sshd[5859]: Connection closed by 10.0.0.1 port 55612 Jan 28 01:48:59.940343 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Jan 28 01:48:59.973325 systemd[1]: sshd@36-10.0.0.33:22-10.0.0.1:55612.service: Deactivated successfully. Jan 28 01:48:59.984625 systemd[1]: session-37.scope: Deactivated successfully. Jan 28 01:48:59.996758 systemd-logind[1540]: Session 37 logged out. Waiting for processes to exit. Jan 28 01:49:00.013204 systemd[1]: Started sshd@37-10.0.0.33:22-10.0.0.1:55614.service - OpenSSH per-connection server daemon (10.0.0.1:55614). Jan 28 01:49:00.023849 systemd-logind[1540]: Removed session 37. Jan 28 01:49:00.354611 sshd[5908]: Accepted publickey for core from 10.0.0.1 port 55614 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:00.363812 containerd[1553]: time="2026-01-28T01:49:00.363047341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:49:00.365733 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:00.423677 systemd-logind[1540]: New session 38 of user core. Jan 28 01:49:00.474630 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 28 01:49:00.541541 containerd[1553]: time="2026-01-28T01:49:00.536047509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:49:00.541541 containerd[1553]: time="2026-01-28T01:49:00.540211262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:49:00.541541 containerd[1553]: time="2026-01-28T01:49:00.540327898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:49:00.565011 kubelet[2762]: E0128 01:49:00.551848 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:49:00.565011 kubelet[2762]: E0128 01:49:00.552003 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:49:00.565011 kubelet[2762]: E0128 01:49:00.552218 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2zt7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-gqzwx_calico-apiserver(a2048076-ab34-4562-b42d-515b64a0bfb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:49:00.565011 kubelet[2762]: E0128 01:49:00.554013 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:49:01.607308 sshd[5911]: Connection closed by 10.0.0.1 port 55614 Jan 28 01:49:01.608200 sshd-session[5908]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:01.635594 systemd[1]: sshd@37-10.0.0.33:22-10.0.0.1:55614.service: Deactivated successfully. Jan 28 01:49:01.658335 systemd[1]: session-38.scope: Deactivated successfully. Jan 28 01:49:01.664652 systemd-logind[1540]: Session 38 logged out. Waiting for processes to exit. Jan 28 01:49:01.687143 systemd[1]: Started sshd@38-10.0.0.33:22-10.0.0.1:55618.service - OpenSSH per-connection server daemon (10.0.0.1:55618). Jan 28 01:49:01.697755 systemd-logind[1540]: Removed session 38. 
Jan 28 01:49:01.911353 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 55618 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:01.911778 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:01.932761 systemd-logind[1540]: New session 39 of user core. Jan 28 01:49:01.959054 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 28 01:49:02.375201 containerd[1553]: time="2026-01-28T01:49:02.374776466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:49:02.525303 sshd[5925]: Connection closed by 10.0.0.1 port 55618 Jan 28 01:49:02.525467 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:02.535880 containerd[1553]: time="2026-01-28T01:49:02.535679735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:49:02.539126 systemd[1]: sshd@38-10.0.0.33:22-10.0.0.1:55618.service: Deactivated successfully. Jan 28 01:49:02.543099 systemd[1]: session-39.scope: Deactivated successfully. Jan 28 01:49:02.546118 containerd[1553]: time="2026-01-28T01:49:02.546061182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:49:02.546314 containerd[1553]: time="2026-01-28T01:49:02.546274820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:49:02.546742 kubelet[2762]: E0128 01:49:02.546696 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:49:02.548791 kubelet[2762]: E0128 01:49:02.548011 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:49:02.548791 kubelet[2762]: E0128 01:49:02.548264 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt9hd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6954f9c796-rjrhf_calico-apiserver(2cd00be8-fccf-4399-b5b1-c60bf8266112): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:49:02.555647 kubelet[2762]: E0128 01:49:02.555535 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:49:02.566100 systemd-logind[1540]: Session 39 logged out. Waiting for processes to exit. Jan 28 01:49:02.575094 systemd-logind[1540]: Removed session 39. 
Jan 28 01:49:04.387474 kubelet[2762]: E0128 01:49:04.382193 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:49:06.372972 containerd[1553]: time="2026-01-28T01:49:06.371681260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:49:06.474272 containerd[1553]: time="2026-01-28T01:49:06.474181295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:49:06.482628 containerd[1553]: time="2026-01-28T01:49:06.481828674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:49:06.482628 containerd[1553]: time="2026-01-28T01:49:06.481972151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:49:06.483263 kubelet[2762]: E0128 01:49:06.483032 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:49:06.483263 kubelet[2762]: E0128 01:49:06.483230 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:49:06.488008 kubelet[2762]: E0128 01:49:06.486182 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:49:06.503142 containerd[1553]: time="2026-01-28T01:49:06.500129277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:49:06.593517 containerd[1553]: time="2026-01-28T01:49:06.591882678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:49:06.603639 containerd[1553]: time="2026-01-28T01:49:06.600734561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:49:06.603639 containerd[1553]: time="2026-01-28T01:49:06.600830856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:49:06.604947 kubelet[2762]: E0128 01:49:06.604568 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:49:06.604947 kubelet[2762]: E0128 01:49:06.604684 2762 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:49:06.604947 kubelet[2762]: E0128 01:49:06.604831 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn7zh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5r68l_calico-system(760a12b1-4a99-4684-a026-7c55d7164578): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:49:06.611336 kubelet[2762]: E0128 01:49:06.611242 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:49:07.560326 systemd[1]: Started sshd@39-10.0.0.33:22-10.0.0.1:54376.service - OpenSSH per-connection server daemon (10.0.0.1:54376). Jan 28 01:49:07.718964 sshd[5941]: Accepted publickey for core from 10.0.0.1 port 54376 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:07.720703 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:07.766704 systemd-logind[1540]: New session 40 of user core. Jan 28 01:49:07.804790 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 28 01:49:08.280610 sshd[5945]: Connection closed by 10.0.0.1 port 54376 Jan 28 01:49:08.286162 sshd-session[5941]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:08.306875 systemd-logind[1540]: Session 40 logged out. Waiting for processes to exit. Jan 28 01:49:08.312587 systemd[1]: sshd@39-10.0.0.33:22-10.0.0.1:54376.service: Deactivated successfully. Jan 28 01:49:08.345132 systemd[1]: session-40.scope: Deactivated successfully. Jan 28 01:49:08.362766 systemd-logind[1540]: Removed session 40. Jan 28 01:49:12.396142 kubelet[2762]: E0128 01:49:12.396054 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:49:12.414390 kubelet[2762]: E0128 01:49:12.396507 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:49:13.324685 systemd[1]: Started sshd@40-10.0.0.33:22-10.0.0.1:54384.service - OpenSSH per-connection server daemon (10.0.0.1:54384). 
Jan 28 01:49:13.366126 kubelet[2762]: E0128 01:49:13.364160 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:49:13.552037 sshd[5982]: Accepted publickey for core from 10.0.0.1 port 54384 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:13.555087 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:13.577655 systemd-logind[1540]: New session 41 of user core. Jan 28 01:49:13.613626 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 28 01:49:14.128573 sshd[5985]: Connection closed by 10.0.0.1 port 54384 Jan 28 01:49:14.127948 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:14.158539 systemd[1]: sshd@40-10.0.0.33:22-10.0.0.1:54384.service: Deactivated successfully. Jan 28 01:49:14.176595 systemd[1]: session-41.scope: Deactivated successfully. Jan 28 01:49:14.185351 systemd-logind[1540]: Session 41 logged out. Waiting for processes to exit. Jan 28 01:49:14.204404 systemd-logind[1540]: Removed session 41. Jan 28 01:49:15.356157 kubelet[2762]: E0128 01:49:15.351531 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:17.349543 kubelet[2762]: E0128 01:49:17.347382 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:17.350645 kubelet[2762]: E0128 01:49:17.350134 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:49:19.191625 systemd[1]: Started sshd@41-10.0.0.33:22-10.0.0.1:57778.service - OpenSSH per-connection server daemon (10.0.0.1:57778). 
Jan 28 01:49:19.358984 containerd[1553]: time="2026-01-28T01:49:19.353171235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:49:19.386979 kubelet[2762]: E0128 01:49:19.386691 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:49:19.513368 sshd[5999]: Accepted publickey for core from 10.0.0.1 port 57778 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:19.517649 sshd-session[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:19.537348 containerd[1553]: time="2026-01-28T01:49:19.536858913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 01:49:19.544665 containerd[1553]: time="2026-01-28T01:49:19.543593283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:49:19.544665 containerd[1553]: time="2026-01-28T01:49:19.543699851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:49:19.544975 kubelet[2762]: E0128 01:49:19.543874 2762 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:49:19.544975 kubelet[2762]: E0128 01:49:19.543991 2762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:49:19.544975 kubelet[2762]: E0128 01:49:19.544133 2762 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82qmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64456467b5-b47z9_calico-system(5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:49:19.545520 kubelet[2762]: E0128 01:49:19.545469 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:49:19.557345 systemd-logind[1540]: New session 42 of user core. 
Jan 28 01:49:19.579401 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 28 01:49:20.101202 sshd[6002]: Connection closed by 10.0.0.1 port 57778 Jan 28 01:49:20.095483 systemd-logind[1540]: Session 42 logged out. Waiting for processes to exit. Jan 28 01:49:20.090199 sshd-session[5999]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:20.098137 systemd[1]: sshd@41-10.0.0.33:22-10.0.0.1:57778.service: Deactivated successfully. Jan 28 01:49:20.143584 systemd[1]: session-42.scope: Deactivated successfully. Jan 28 01:49:20.186389 systemd-logind[1540]: Removed session 42. Jan 28 01:49:24.388083 kubelet[2762]: E0128 01:49:24.387775 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:49:25.141281 systemd[1]: Started sshd@42-10.0.0.33:22-10.0.0.1:45542.service - OpenSSH per-connection server daemon (10.0.0.1:45542). Jan 28 01:49:25.353334 kubelet[2762]: E0128 01:49:25.353288 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:49:25.370729 sshd[6015]: Accepted publickey for core from 10.0.0.1 port 45542 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:25.379209 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:25.445569 systemd-logind[1540]: New session 43 of user core. Jan 28 01:49:25.481112 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 28 01:49:26.035839 sshd[6018]: Connection closed by 10.0.0.1 port 45542 Jan 28 01:49:26.037137 sshd-session[6015]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:26.062291 systemd[1]: sshd@42-10.0.0.33:22-10.0.0.1:45542.service: Deactivated successfully. Jan 28 01:49:26.080968 systemd[1]: session-43.scope: Deactivated successfully. Jan 28 01:49:26.114595 systemd-logind[1540]: Session 43 logged out. Waiting for processes to exit. Jan 28 01:49:26.118787 systemd-logind[1540]: Removed session 43. 
Jan 28 01:49:26.355713 kubelet[2762]: E0128 01:49:26.352427 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:49:28.054157 update_engine[1545]: I20260128 01:49:28.053740 1545 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 28 01:49:28.054157 update_engine[1545]: I20260128 01:49:28.053817 1545 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 28 01:49:28.064699 update_engine[1545]: I20260128 01:49:28.064405 1545 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 28 01:49:28.066696 update_engine[1545]: I20260128 01:49:28.066422 1545 omaha_request_params.cc:62] Current group set to stable Jan 28 01:49:28.066828 update_engine[1545]: I20260128 01:49:28.066804 1545 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 28 01:49:28.068195 update_engine[1545]: I20260128 01:49:28.068166 1545 update_attempter.cc:643] Scheduling an action processor start. Jan 28 01:49:28.068318 update_engine[1545]: I20260128 01:49:28.068293 1545 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 28 01:49:28.068512 update_engine[1545]: I20260128 01:49:28.068440 1545 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 28 01:49:28.068755 update_engine[1545]: I20260128 01:49:28.068732 1545 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 28 01:49:28.073152 update_engine[1545]: I20260128 01:49:28.068806 1545 omaha_request_action.cc:272] Request: Jan 28 01:49:28.073152 update_engine[1545]: I20260128 01:49:28.068822 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:49:28.098394 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 28 01:49:28.118781 update_engine[1545]: I20260128 01:49:28.118367 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:49:28.136018 update_engine[1545]: I20260128 01:49:28.124855 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:49:28.148257 update_engine[1545]: E20260128 01:49:28.146380 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:49:28.148257 update_engine[1545]: I20260128 01:49:28.146595 1545 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 28 01:49:28.352175 kubelet[2762]: E0128 01:49:28.350731 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:49:31.114832 systemd[1]: Started sshd@43-10.0.0.33:22-10.0.0.1:45546.service - OpenSSH per-connection server daemon (10.0.0.1:45546). Jan 28 01:49:31.270008 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 45546 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:31.278750 sshd-session[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:31.314392 systemd-logind[1540]: New session 44 of user core. Jan 28 01:49:31.335821 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 28 01:49:31.351809 kubelet[2762]: E0128 01:49:31.351743 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:49:31.380447 kubelet[2762]: E0128 01:49:31.370131 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:49:31.889440 sshd[6064]: Connection closed by 10.0.0.1 port 45546 Jan 28 01:49:31.888838 sshd-session[6061]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:31.911406 systemd-logind[1540]: Session 44 logged out. Waiting for processes to exit. 
Jan 28 01:49:31.912745 systemd[1]: sshd@43-10.0.0.33:22-10.0.0.1:45546.service: Deactivated successfully. Jan 28 01:49:32.008336 systemd[1]: session-44.scope: Deactivated successfully. Jan 28 01:49:32.029304 systemd-logind[1540]: Removed session 44. Jan 28 01:49:35.348795 kubelet[2762]: E0128 01:49:35.347747 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:35.360310 kubelet[2762]: E0128 01:49:35.360187 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:49:36.934059 systemd[1]: Started sshd@44-10.0.0.33:22-10.0.0.1:36210.service - OpenSSH per-connection server daemon (10.0.0.1:36210). Jan 28 01:49:37.122374 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 36210 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:37.126559 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:37.148048 systemd-logind[1540]: New session 45 of user core. Jan 28 01:49:37.174743 systemd[1]: Started session-45.scope - Session 45 of User core. Jan 28 01:49:37.562038 sshd[6080]: Connection closed by 10.0.0.1 port 36210 Jan 28 01:49:37.562584 sshd-session[6077]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:37.575424 systemd[1]: sshd@44-10.0.0.33:22-10.0.0.1:36210.service: Deactivated successfully. Jan 28 01:49:37.579133 systemd-logind[1540]: Session 45 logged out. Waiting for processes to exit. Jan 28 01:49:37.583820 systemd[1]: session-45.scope: Deactivated successfully. Jan 28 01:49:37.591736 systemd-logind[1540]: Removed session 45. Jan 28 01:49:37.931977 update_engine[1545]: I20260128 01:49:37.931243 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:49:37.934657 update_engine[1545]: I20260128 01:49:37.932487 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:49:37.934657 update_engine[1545]: I20260128 01:49:37.934028 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:49:37.950727 update_engine[1545]: E20260128 01:49:37.950291 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:49:37.950727 update_engine[1545]: I20260128 01:49:37.950454 1545 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 28 01:49:38.367063 kubelet[2762]: E0128 01:49:38.366328 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470" Jan 28 01:49:40.353417 kubelet[2762]: E0128 01:49:40.349614 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-gqzwx" podUID="a2048076-ab34-4562-b42d-515b64a0bfb4" Jan 28 01:49:40.358629 kubelet[2762]: E0128 01:49:40.354391 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6954f9c796-rjrhf" podUID="2cd00be8-fccf-4399-b5b1-c60bf8266112" Jan 28 01:49:42.619355 systemd[1]: Started sshd@45-10.0.0.33:22-10.0.0.1:36220.service - OpenSSH per-connection server daemon (10.0.0.1:36220). Jan 28 01:49:42.812968 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 36220 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:42.812212 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:42.848684 systemd-logind[1540]: New session 46 of user core. Jan 28 01:49:42.883179 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 28 01:49:43.231306 sshd[6098]: Connection closed by 10.0.0.1 port 36220 Jan 28 01:49:43.233233 sshd-session[6095]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:43.246328 systemd[1]: sshd@45-10.0.0.33:22-10.0.0.1:36220.service: Deactivated successfully. Jan 28 01:49:43.254599 systemd[1]: session-46.scope: Deactivated successfully. Jan 28 01:49:43.263052 systemd-logind[1540]: Session 46 logged out. Waiting for processes to exit. Jan 28 01:49:43.277658 systemd-logind[1540]: Removed session 46. 
Jan 28 01:49:44.354671 kubelet[2762]: E0128 01:49:44.354475 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64456467b5-b47z9" podUID="5fd9c2ef-ecf6-4c2a-ace3-1669c7df4179" Jan 28 01:49:46.372206 kubelet[2762]: E0128 01:49:46.369406 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5r68l" podUID="760a12b1-4a99-4684-a026-7c55d7164578" Jan 28 01:49:47.354670 kubelet[2762]: E0128 01:49:47.354421 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc7f69c44-p8qpq" podUID="bfdea9eb-0bce-4c15-b321-f9c7a00efdf0" Jan 28 01:49:47.933111 update_engine[1545]: I20260128 01:49:47.933000 1545 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 28 01:49:47.934080 update_engine[1545]: I20260128 01:49:47.933141 1545 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 28 01:49:47.934910 update_engine[1545]: I20260128 01:49:47.934824 1545 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 28 01:49:47.951730 update_engine[1545]: E20260128 01:49:47.951606 1545 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 28 01:49:47.952011 update_engine[1545]: I20260128 01:49:47.951768 1545 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 28 01:49:48.283863 systemd[1]: Started sshd@46-10.0.0.33:22-10.0.0.1:49742.service - OpenSSH per-connection server daemon (10.0.0.1:49742). Jan 28 01:49:48.360855 kubelet[2762]: E0128 01:49:48.357437 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:48.467972 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 49742 ssh2: RSA SHA256:G5V+d5+OkP6yq5VG4vsxeR8tFsyMoF48CG54hlljR3w Jan 28 01:49:48.480100 sshd-session[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:49:48.507178 systemd-logind[1540]: New session 47 of user core. Jan 28 01:49:48.536258 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 28 01:49:48.865327 sshd[6115]: Connection closed by 10.0.0.1 port 49742 Jan 28 01:49:48.867019 sshd-session[6112]: pam_unix(sshd:session): session closed for user core Jan 28 01:49:48.882142 systemd[1]: sshd@46-10.0.0.33:22-10.0.0.1:49742.service: Deactivated successfully. Jan 28 01:49:48.889773 systemd[1]: session-47.scope: Deactivated successfully. Jan 28 01:49:48.896800 systemd-logind[1540]: Session 47 logged out. Waiting for processes to exit. Jan 28 01:49:48.903506 systemd-logind[1540]: Removed session 47. Jan 28 01:49:50.350883 kubelet[2762]: E0128 01:49:50.350644 2762 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 28 01:49:50.357433 kubelet[2762]: E0128 01:49:50.356531 2762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vgk8b" podUID="c966ee1e-4a54-4737-b8f4-7c2be261a470"