Jan 23 01:37:54.466512 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:37:54.466547 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:37:54.466558 kernel: BIOS-provided physical RAM map:
Jan 23 01:37:54.466572 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:37:54.466580 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 01:37:54.466588 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 01:37:54.466597 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 01:37:54.466606 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 01:37:54.467174 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 01:37:54.467195 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 01:37:54.467207 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 23 01:37:54.467218 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 01:37:54.467235 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 01:37:54.467243 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 01:37:54.467253 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 01:37:54.467262 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 01:37:54.467403 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 01:37:54.467419 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 01:37:54.467431 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 01:37:54.467442 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 01:37:54.467452 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 01:37:54.467463 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 01:37:54.467473 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 01:37:54.467483 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:37:54.467494 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 01:37:54.467504 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:37:54.467515 kernel: NX (Execute Disable) protection: active
Jan 23 01:37:54.467526 kernel: APIC: Static calls initialized
Jan 23 01:37:54.467540 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Jan 23 01:37:54.467551 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Jan 23 01:37:54.467561 kernel: extended physical RAM map:
Jan 23 01:37:54.467572 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:37:54.467583 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 23 01:37:54.467593 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 23 01:37:54.467604 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 23 01:37:54.467614 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 23 01:37:54.467625 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 23 01:37:54.467636 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 23 01:37:54.467646 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Jan 23 01:37:54.467662 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Jan 23 01:37:54.467677 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Jan 23 01:37:54.467688 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Jan 23 01:37:54.467699 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Jan 23 01:37:54.467710 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 23 01:37:54.467725 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 23 01:37:54.467958 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 23 01:37:54.467969 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 23 01:37:54.467978 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 23 01:37:54.467987 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Jan 23 01:37:54.467996 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Jan 23 01:37:54.468005 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Jan 23 01:37:54.468014 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Jan 23 01:37:54.468151 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 23 01:37:54.468161 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 23 01:37:54.468170 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 01:37:54.468184 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:37:54.468193 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 01:37:54.468202 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:37:54.468328 kernel: efi: EFI v2.7 by EDK II
Jan 23 01:37:54.468342 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Jan 23 01:37:54.468446 kernel: random: crng init done
Jan 23 01:37:54.468459 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 01:37:54.468590 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 01:37:54.468602 kernel: secureboot: Secure boot disabled
Jan 23 01:37:54.468612 kernel: SMBIOS 2.8 present.
Jan 23 01:37:54.468621 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 23 01:37:54.468635 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:37:54.468644 kernel: Hypervisor detected: KVM
Jan 23 01:37:54.468653 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 01:37:54.468662 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:37:54.468671 kernel: kvm-clock: using sched offset of 30230680219 cycles
Jan 23 01:37:54.468683 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:37:54.468694 kernel: tsc: Detected 2445.424 MHz processor
Jan 23 01:37:54.468705 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:37:54.468718 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:37:54.468729 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 23 01:37:54.468960 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:37:54.468979 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:37:54.468992 kernel: Using GB pages for direct mapping
Jan 23 01:37:54.469005 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:37:54.469016 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 23 01:37:54.469158 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 01:37:54.469170 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:37:54.469181 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:37:54.469192 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 23 01:37:54.469204 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:37:54.469222 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:37:54.469234 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:37:54.469244 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:37:54.469254 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 01:37:54.469263 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 23 01:37:54.469272 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jan 23 01:37:54.469281 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 23 01:37:54.469291 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 23 01:37:54.469307 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 23 01:37:54.469317 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 23 01:37:54.469328 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 23 01:37:54.469340 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 23 01:37:54.469351 kernel: No NUMA configuration found
Jan 23 01:37:54.469360 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 23 01:37:54.469369 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Jan 23 01:37:54.469378 kernel: Zone ranges:
Jan 23 01:37:54.469388 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:37:54.469401 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 23 01:37:54.469412 kernel: Normal empty
Jan 23 01:37:54.469422 kernel: Device empty
Jan 23 01:37:54.469433 kernel: Movable zone start for each node
Jan 23 01:37:54.469445 kernel: Early memory node ranges
Jan 23 01:37:54.469456 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:37:54.469588 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 23 01:37:54.469602 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 23 01:37:54.469613 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 23 01:37:54.469623 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 23 01:37:54.469640 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 23 01:37:54.469651 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Jan 23 01:37:54.469662 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Jan 23 01:37:54.469673 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 23 01:37:54.472614 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:37:54.473413 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:37:54.473432 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 23 01:37:54.473443 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:37:54.473453 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 23 01:37:54.473464 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 01:37:54.473475 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 01:37:54.473486 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 23 01:37:54.473501 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 23 01:37:54.473514 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:37:54.473524 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:37:54.473533 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:37:54.473547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:37:54.473557 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:37:54.473566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:37:54.473576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:37:54.473587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:37:54.473600 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:37:54.473718 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:37:54.473730 kernel: TSC deadline timer available
Jan 23 01:37:54.473966 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:37:54.473982 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:37:54.473992 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:37:54.474002 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:37:54.474013 kernel: CPU topo: Num. cores per package: 4
Jan 23 01:37:54.474150 kernel: CPU topo: Num. threads per package: 4
Jan 23 01:37:54.474162 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 01:37:54.474173 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:37:54.474184 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 01:37:54.474195 kernel: kvm-guest: setup PV sched yield
Jan 23 01:37:54.474206 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 23 01:37:54.474221 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:37:54.474231 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:37:54.474242 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 01:37:54.474253 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 01:37:54.474263 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 01:37:54.474273 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 01:37:54.474283 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:37:54.474294 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:37:54.474540 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:37:54.474553 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:37:54.474563 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:37:54.474573 kernel: Fallback order for Node 0: 0
Jan 23 01:37:54.474582 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Jan 23 01:37:54.474603 kernel: Policy zone: DMA32
Jan 23 01:37:54.474614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:37:54.474623 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 01:37:54.474633 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:37:54.474647 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:37:54.474657 kernel: Dynamic Preempt: voluntary
Jan 23 01:37:54.474669 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:37:54.474693 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:37:54.474704 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 01:37:54.474714 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:37:54.474724 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:37:54.475181 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:37:54.475196 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:37:54.475211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 01:37:54.475343 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:37:54.475358 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:37:54.475370 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:37:54.475380 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 01:37:54.475390 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:37:54.475400 kernel: Console: colour dummy device 80x25 Jan 23 01:37:54.475410 kernel: printk: legacy console [ttyS0] enabled Jan 23 01:37:54.475419 kernel: ACPI: Core revision 20240827 Jan 23 01:37:54.475436 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 23 01:37:54.475447 kernel: APIC: Switch to symmetric I/O mode setup Jan 23 01:37:54.475458 kernel: x2apic enabled Jan 23 01:37:54.475588 kernel: APIC: Switched APIC routing to: physical x2apic Jan 23 01:37:54.475598 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 23 01:37:54.475608 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 23 01:37:54.475618 kernel: kvm-guest: setup PV IPIs Jan 23 01:37:54.475627 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 23 01:37:54.475640 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns Jan 23 01:37:54.475658 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424) Jan 23 01:37:54.475668 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 23 01:37:54.475677 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 23 01:37:54.475687 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 23 01:37:54.475697 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 23 01:37:54.475707 kernel: Spectre V2 : Mitigation: Retpolines Jan 23 01:37:54.475721 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 23 01:37:54.475731 kernel: Speculative Store Bypass: Vulnerable Jan 23 01:37:54.475965 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 23 01:37:54.475982 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 23 01:37:54.476216 kernel: active return thunk: srso_alias_return_thunk Jan 23 01:37:54.476228 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 23 01:37:54.476239 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 23 01:37:54.476249 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 23 01:37:54.476259 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 23 01:37:54.476270 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 23 01:37:54.476280 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 23 01:37:54.476296 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 23 01:37:54.476306 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 23 01:37:54.476317 kernel: Freeing SMP alternatives memory: 32K Jan 23 01:37:54.476328 kernel: pid_max: default: 32768 minimum: 301 Jan 23 01:37:54.476338 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 01:37:54.476348 kernel: landlock: Up and running. Jan 23 01:37:54.476358 kernel: SELinux: Initializing. 
Jan 23 01:37:54.476369 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 01:37:54.476380 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 01:37:54.476394 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 23 01:37:54.476405 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 23 01:37:54.476415 kernel: signal: max sigframe size: 1776 Jan 23 01:37:54.476426 kernel: rcu: Hierarchical SRCU implementation. Jan 23 01:37:54.476438 kernel: rcu: Max phase no-delay instances is 400. Jan 23 01:37:54.476448 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 01:37:54.497317 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 23 01:37:54.497343 kernel: smp: Bringing up secondary CPUs ... Jan 23 01:37:54.497355 kernel: smpboot: x86: Booting SMP configuration: Jan 23 01:37:54.497372 kernel: .... node #0, CPUs: #1 #2 #3 Jan 23 01:37:54.497382 kernel: smp: Brought up 1 node, 4 CPUs Jan 23 01:37:54.497392 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 23 01:37:54.497403 kernel: Memory: 2414476K/2565800K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145388K reserved, 0K cma-reserved) Jan 23 01:37:54.497413 kernel: devtmpfs: initialized Jan 23 01:37:54.497422 kernel: x86/mm: Memory block size: 128MB Jan 23 01:37:54.497432 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 23 01:37:54.497443 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 23 01:37:54.497456 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Jan 23 01:37:54.497471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 23 01:37:54.497481 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes) Jan 23 01:37:54.497491 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 23 01:37:54.497500 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 01:37:54.497510 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 23 01:37:54.497519 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 01:37:54.497530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 01:37:54.497540 kernel: audit: initializing netlink subsys (disabled) Jan 23 01:37:54.497550 kernel: audit: type=2000 audit(1769132250.193:1): state=initialized audit_enabled=0 res=1 Jan 23 01:37:54.497563 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 01:37:54.497573 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 23 01:37:54.497582 kernel: cpuidle: using governor menu Jan 23 01:37:54.497594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 01:37:54.497723 kernel: dca service started, version 1.12.1 Jan 23 01:37:54.497982 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jan 23 01:37:54.497995 kernel: PCI: Using configuration type 1 for base access Jan 23 01:37:54.498009 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 23 01:37:54.498155 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:37:54.498168 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:37:54.498180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:37:54.498191 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:37:54.498203 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:37:54.498215 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:37:54.498226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:37:54.498237 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:37:54.498250 kernel: ACPI: Interpreter enabled
Jan 23 01:37:54.498261 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 01:37:54.498277 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:37:54.498289 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:37:54.498301 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:37:54.498313 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:37:54.498326 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:37:54.499697 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:37:54.500265 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 01:37:54.500455 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 01:37:54.500472 kernel: PCI host bridge to bus 0000:00
Jan 23 01:37:54.501666 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:37:54.502210 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:37:54.502373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:37:54.502536 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 23 01:37:54.502709 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 01:37:54.503270 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 23 01:37:54.503442 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:37:54.505542 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:37:54.506336 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:37:54.506517 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jan 23 01:37:54.506699 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jan 23 01:37:54.507264 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 01:37:54.507444 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:37:54.507619 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 17578 usecs
Jan 23 01:37:54.508279 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 01:37:54.508463 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jan 23 01:37:54.508640 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jan 23 01:37:54.509172 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 23 01:37:54.510176 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:37:54.510367 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jan 23 01:37:54.510552 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jan 23 01:37:54.527569 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 23 01:37:54.528964 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:37:54.529396 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jan 23 01:37:54.529571 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jan 23 01:37:54.530001 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 23 01:37:54.530307 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jan 23 01:37:54.532363 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:37:54.532553 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:37:54.532712 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 16601 usecs
Jan 23 01:37:54.533355 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:37:54.533544 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jan 23 01:37:54.533969 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jan 23 01:37:54.534633 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:37:54.535245 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jan 23 01:37:54.535267 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:37:54.535280 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:37:54.535291 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:37:54.535300 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:37:54.535317 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:37:54.535326 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:37:54.535336 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:37:54.535345 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:37:54.535355 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:37:54.535366 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:37:54.535377 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:37:54.535389 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:37:54.535402 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:37:54.535417 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:37:54.535427 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:37:54.535436 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:37:54.535446 kernel: iommu: Default domain type: Translated
Jan 23 01:37:54.535456 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:37:54.535465 kernel: efivars: Registered efivars operations
Jan 23 01:37:54.535475 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:37:54.535485 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:37:54.535498 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 23 01:37:54.535515 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 23 01:37:54.535524 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Jan 23 01:37:54.535533 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Jan 23 01:37:54.535542 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 23 01:37:54.535555 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 23 01:37:54.535566 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Jan 23 01:37:54.535575 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 23 01:37:54.537516 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:37:54.537698 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:37:54.539000 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:37:54.539157 kernel: vgaarb: loaded
Jan 23 01:37:54.539174 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 01:37:54.539288 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 01:37:54.539302 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:37:54.539312 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:37:54.539322 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:37:54.539332 kernel: pnp: PnP ACPI init
Jan 23 01:37:54.541318 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 01:37:54.541344 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 01:37:54.541355 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:37:54.541365 kernel: NET: Registered PF_INET protocol family
Jan 23 01:37:54.541375 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:37:54.541385 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 01:37:54.541415 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:37:54.541430 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:37:54.541443 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 01:37:54.541456 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 01:37:54.541474 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:37:54.541485 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:37:54.541499 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:37:54.541510 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:37:54.541686 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 01:37:54.542511 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jan 23 01:37:54.542693 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:37:54.543962 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:37:54.544268 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:37:54.545269 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 23 01:37:54.545429 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 01:37:54.546466 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 23 01:37:54.546485 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:37:54.546496 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 01:37:54.546507 kernel: Initialise system trusted keyrings
Jan 23 01:37:54.546517 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 01:37:54.546533 kernel: Key type asymmetric registered
Jan 23 01:37:54.546543 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:37:54.546554 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:37:54.546564 kernel: io scheduler mq-deadline registered
Jan 23 01:37:54.546576 kernel: io scheduler kyber registered
Jan 23 01:37:54.546587 kernel: io scheduler bfq registered
Jan 23 01:37:54.546599 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:37:54.546614 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 01:37:54.546630 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 01:37:54.546642 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 01:37:54.547559 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:37:54.547573 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:37:54.547584 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:37:54.547597 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:37:54.547611 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:37:54.549209 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 01:37:54.549231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:37:54.577341 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 01:37:54.577561 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T01:37:49 UTC (1769132269)
Jan 23 01:37:54.577938 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 01:37:54.577962 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 01:37:54.577973 kernel: efifb: probing for efifb
Jan 23 01:37:54.577984 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 23 01:37:54.578004 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 01:37:54.578015 kernel: efifb: scrolling: redraw
Jan 23 01:37:54.578161 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:37:54.578173 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 01:37:54.578183 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:37:54.578193 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:37:54.578204 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:37:54.578214 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:37:54.578224 kernel: Segment Routing with IPv6
Jan 23 01:37:54.578240 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:37:54.578252 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:37:54.578265 kernel: Key type dns_resolver registered
Jan 23 01:37:54.578276 kernel: IPI shorthand broadcast: enabled
Jan 23 01:37:54.578286 kernel: sched_clock: Marking stable (20444098187, 2571147406)->(24527280756, -1512035163)
Jan 23 01:37:54.578297 kernel: registered taskstats version 1
Jan 23 01:37:54.578307 kernel: Loading compiled-in X.509 certificates
Jan 23 01:37:54.578317 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:37:54.578327 kernel: Demotion targets for Node 0: null
Jan 23 01:37:54.578341 kernel: Key type .fscrypt registered
Jan 23 01:37:54.578353 kernel: Key type fscrypt-provisioning registered
Jan 23 01:37:54.578369 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:37:54.578382 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:37:54.578395 kernel: ima: No architecture policies found
Jan 23 01:37:54.578408 kernel: clk: Disabling unused clocks
Jan 23 01:37:54.578422 kernel: Warning: unable to open an initial console.
Jan 23 01:37:54.578436 kernel: Freeing unused kernel image (initmem) memory: 46196K Jan 23 01:37:54.578450 kernel: Write protecting the kernel read-only data: 40960k Jan 23 01:37:54.578466 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 01:37:54.578476 kernel: Run /init as init process Jan 23 01:37:54.578490 kernel: with arguments: Jan 23 01:37:54.578504 kernel: /init Jan 23 01:37:54.578514 kernel: with environment: Jan 23 01:37:54.578527 kernel: HOME=/ Jan 23 01:37:54.578540 kernel: TERM=linux Jan 23 01:37:54.578555 systemd[1]: Successfully made /usr/ read-only. Jan 23 01:37:54.578575 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:37:54.578587 systemd[1]: Detected virtualization kvm. Jan 23 01:37:54.578598 systemd[1]: Detected architecture x86-64. Jan 23 01:37:54.578608 systemd[1]: Running in initrd. Jan 23 01:37:54.578619 systemd[1]: No hostname configured, using default hostname. Jan 23 01:37:54.578630 systemd[1]: Hostname set to . Jan 23 01:37:54.578640 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:37:54.578651 systemd[1]: Queued start job for default target initrd.target. Jan 23 01:37:54.578666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:37:54.578677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:37:54.578691 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 01:37:54.578706 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:37:54.578717 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 01:37:54.578729 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 01:37:54.578978 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 01:37:54.578995 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 01:37:54.579009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:37:54.579150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:37:54.579165 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:37:54.579176 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:37:54.579187 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:37:54.579198 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:37:54.579208 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:37:54.579224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:37:54.579235 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 01:37:54.579246 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 01:37:54.579258 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 01:37:54.579271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:37:54.579284 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:37:54.579298 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:37:54.579312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 01:37:54.579326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:37:54.579342 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 01:37:54.579354 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 01:37:54.579364 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 01:37:54.579376 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:37:54.579386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:37:54.579397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:37:54.579408 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 01:37:54.579470 systemd-journald[204]: Collecting audit messages is disabled. Jan 23 01:37:54.579511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:37:54.579524 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 01:37:54.579535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:37:54.579547 systemd-journald[204]: Journal started Jan 23 01:37:54.579570 systemd-journald[204]: Runtime Journal (/run/log/journal/5bc2d0d5c2af4c49a95aa133d44a99e0) is 6M, max 48.1M, 42.1M free. Jan 23 01:37:54.497376 systemd-modules-load[205]: Inserted module 'overlay' Jan 23 01:37:54.705439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:37:54.760431 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 01:37:54.793170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:37:54.839287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 01:37:54.952229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:37:54.991460 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 01:37:54.996330 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:37:55.083365 kernel: Bridge firewalling registered Jan 23 01:37:55.084185 systemd-modules-load[205]: Inserted module 'br_netfilter' Jan 23 01:37:55.095557 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:37:55.116523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:37:55.220624 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 01:37:55.228129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:37:55.279369 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:37:55.301406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 01:37:55.306720 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 01:37:55.347476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:37:55.424330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:37:55.508916 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 01:37:55.800565 systemd-resolved[248]: Positive Trust Anchors: Jan 23 01:37:55.800697 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:37:55.800950 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:37:55.847539 systemd-resolved[248]: Defaulting to hostname 'linux'. Jan 23 01:37:55.882171 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:37:55.954999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:37:56.733976 kernel: SCSI subsystem initialized Jan 23 01:37:56.797993 kernel: Loading iSCSI transport class v2.0-870. Jan 23 01:37:56.891155 kernel: iscsi: registered transport (tcp) Jan 23 01:37:57.050658 kernel: iscsi: registered transport (qla4xxx) Jan 23 01:37:57.056382 kernel: QLogic iSCSI HBA Driver Jan 23 01:37:57.246310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:37:57.353911 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:37:57.382638 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:37:57.727401 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 01:37:57.747519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 01:37:58.049728 kernel: raid6: avx2x4 gen() 16230 MB/s Jan 23 01:37:58.075355 kernel: raid6: avx2x2 gen() 12780 MB/s Jan 23 01:37:58.113981 kernel: raid6: avx2x1 gen() 8528 MB/s Jan 23 01:37:58.114174 kernel: raid6: using algorithm avx2x4 gen() 16230 MB/s Jan 23 01:37:58.150878 kernel: raid6: .... xor() 3697 MB/s, rmw enabled Jan 23 01:37:58.150955 kernel: raid6: using avx2x2 recovery algorithm Jan 23 01:37:58.286704 kernel: xor: automatically using best checksumming function avx Jan 23 01:37:59.883608 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 01:38:00.021528 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:38:00.051002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:38:00.207574 systemd-udevd[457]: Using default interface naming scheme 'v255'. 
Jan 23 01:38:00.221644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:38:00.276259 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 01:38:00.492337 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jan 23 01:38:00.761552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:38:00.808342 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:38:01.418962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:38:01.440529 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 01:38:01.728474 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 01:38:01.729534 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 01:38:01.828297 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 23 01:38:01.889012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:38:02.007956 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 01:38:02.008008 kernel: GPT:9289727 != 19775487 Jan 23 01:38:02.008024 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 01:38:02.008174 kernel: GPT:9289727 != 19775487 Jan 23 01:38:02.008190 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 01:38:02.008203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:38:01.889593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:38:01.988465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:38:02.013344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:38:02.045526 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:38:02.208977 kernel: libata version 3.00 loaded. Jan 23 01:38:02.337618 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 01:38:02.351541 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 01:38:02.363732 kernel: AES CTR mode by8 optimization enabled Jan 23 01:38:02.364523 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 23 01:38:02.474389 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 01:38:02.474962 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 01:38:02.475317 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 01:38:02.512720 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:38:02.570293 kernel: scsi host0: ahci Jan 23 01:38:02.586984 kernel: scsi host1: ahci Jan 23 01:38:02.641524 kernel: scsi host2: ahci Jan 23 01:38:02.669286 kernel: scsi host3: ahci Jan 23 01:38:02.688496 kernel: scsi host4: ahci Jan 23 01:38:02.697336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 23 01:38:02.844172 kernel: scsi host5: ahci Jan 23 01:38:02.845003 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Jan 23 01:38:02.845027 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Jan 23 01:38:02.845177 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Jan 23 01:38:02.845192 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Jan 23 01:38:02.845204 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Jan 23 01:38:02.845226 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Jan 23 01:38:02.888132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 01:38:02.947697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:38:03.011330 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 01:38:03.013385 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 01:38:03.113187 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 01:38:03.159605 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 01:38:03.186204 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 01:38:03.208276 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 01:38:03.224256 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 01:38:03.233143 disk-uuid[624]: Primary Header is updated. Jan 23 01:38:03.233143 disk-uuid[624]: Secondary Entries is updated. Jan 23 01:38:03.233143 disk-uuid[624]: Secondary Header is updated. Jan 23 01:38:03.367229 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 01:38:03.367281 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 01:38:03.367305 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 01:38:03.367319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:38:03.367332 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 01:38:03.367349 kernel: ata3.00: applying bridge limits Jan 23 01:38:03.367362 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 01:38:03.367378 kernel: ata3.00: configured for UDMA/100 Jan 23 01:38:03.391203 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 01:38:03.553521 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 01:38:03.554228 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 01:38:03.617402 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 01:38:04.344257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 01:38:04.348383 disk-uuid[625]: The operation has completed successfully. Jan 23 01:38:04.807240 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 01:38:04.807545 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 01:38:04.814164 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 01:38:04.914632 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 01:38:04.936588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 23 01:38:04.973342 sh[643]: Success Jan 23 01:38:04.988995 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:38:05.004638 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:38:05.042663 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 01:38:05.389469 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 01:38:05.470718 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 01:38:05.470987 kernel: device-mapper: uevent: version 1.0.3 Jan 23 01:38:05.471010 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 01:38:05.560727 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 01:38:05.855551 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 01:38:05.862432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 01:38:05.944637 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 01:38:06.014480 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (665) Jan 23 01:38:06.047397 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 01:38:06.047485 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:38:06.198635 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 01:38:06.199183 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 01:38:06.219557 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 01:38:06.240387 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:38:06.303575 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 01:38:06.340995 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 01:38:06.372636 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 01:38:06.591320 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (697) Jan 23 01:38:06.622973 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:38:06.623336 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:38:06.683454 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:38:06.683586 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:38:06.734991 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:38:06.761557 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 01:38:06.806206 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 01:38:07.484165 kernel: hrtimer: interrupt took 2441208 ns Jan 23 01:38:08.142197 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:38:08.169655 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 01:38:08.420948 systemd-networkd[840]: lo: Link UP Jan 23 01:38:08.422457 systemd-networkd[840]: lo: Gained carrier Jan 23 01:38:08.445980 ignition[751]: Ignition 2.22.0 Jan 23 01:38:08.428702 systemd-networkd[840]: Enumeration completed Jan 23 01:38:08.446205 ignition[751]: Stage: fetch-offline Jan 23 01:38:08.429325 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:38:08.446364 ignition[751]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:38:08.439028 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:38:08.446378 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:38:08.439035 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:38:08.447207 ignition[751]: parsed url from cmdline: "" Jan 23 01:38:08.446617 systemd-networkd[840]: eth0: Link UP Jan 23 01:38:08.447215 ignition[751]: no config URL provided Jan 23 01:38:08.448604 systemd-networkd[840]: eth0: Gained carrier Jan 23 01:38:08.447225 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 01:38:08.448617 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:38:08.447238 ignition[751]: no config at "/usr/lib/ignition/user.ign" Jan 23 01:38:08.453995 systemd[1]: Reached target network.target - Network. Jan 23 01:38:08.447271 ignition[751]: op(1): [started] loading QEMU firmware config module Jan 23 01:38:08.609366 systemd-networkd[840]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 01:38:08.447278 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 01:38:09.038475 ignition[751]: op(1): [finished] loading QEMU firmware config module Jan 23 01:38:09.698575 systemd-networkd[840]: eth0: Gained IPv6LL Jan 23 01:38:11.883517 ignition[751]: parsing config with SHA512: 11533700f88d8ba87647d60cb43675f3e5b54560471e2cc96a9907a8418285dc2275ca1b1834da2461300e2a63fc1dd103cbf2fa07b28f0f490801a55680082c Jan 23 01:38:12.047494 unknown[751]: fetched base config from "system" Jan 23 01:38:12.047705 unknown[751]: fetched user config from "qemu" Jan 23 01:38:12.091663 ignition[751]: fetch-offline: fetch-offline passed Jan 23 01:38:12.093219 ignition[751]: Ignition finished successfully Jan 23 01:38:12.101709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:38:12.106629 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 01:38:12.110475 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 01:38:12.804366 ignition[847]: Ignition 2.22.0 Jan 23 01:38:12.805188 ignition[847]: Stage: kargs Jan 23 01:38:12.824390 ignition[847]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:38:12.824539 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:38:12.865448 ignition[847]: kargs: kargs passed Jan 23 01:38:12.865654 ignition[847]: Ignition finished successfully Jan 23 01:38:12.881554 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 01:38:12.932003 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
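Annotation: eth0 is matched by the shipped catch-all /usr/lib/systemd/network/zz-default.network and then acquires 10.0.0.137/16 over DHCP. A minimal sketch of what such a catch-all DHCP unit looks like (an approximation, not copied from the image):

    [Match]
    Name=*

    [Network]
    DHCP=yes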
Jan 23 01:38:14.590360 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1847307916 wd_nsec: 1847307420 Jan 23 01:38:14.718346 ignition[855]: Ignition 2.22.0 Jan 23 01:38:14.718419 ignition[855]: Stage: disks Jan 23 01:38:14.719673 ignition[855]: no configs at "/usr/lib/ignition/base.d" Jan 23 01:38:14.737708 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 01:38:14.719687 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:38:14.755648 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 01:38:14.721014 ignition[855]: disks: disks passed Jan 23 01:38:14.775495 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 01:38:14.721136 ignition[855]: Ignition finished successfully Jan 23 01:38:14.824202 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:38:14.834347 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:38:14.847183 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:38:14.849659 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:38:14.993656 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:38:15.005657 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:38:15.034488 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:38:15.465963 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:38:15.468601 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:38:15.482427 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:38:15.486147 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:38:15.523169 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:38:15.555667 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (874) Jan 23 01:38:15.532559 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:38:15.614221 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:38:15.614261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:38:15.532645 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:38:15.532688 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:38:15.579414 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:38:15.677418 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:38:15.677461 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:38:15.692971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:38:15.712965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
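Annotation: at this point the initrd has begun assembling the future root hierarchy under /sysroot. In plain mount terms the layout built here and in the next block is roughly (a sketch based only on the device names in the log):

    mount /dev/disk/by-label/ROOT /sysroot        # ext4 on vda9
    mount -o ro /dev/mapper/usr   /sysroot/usr    # verity-protected btrfs /usr
    mount /dev/vda6               /sysroot/oem    # btrfs labeled OEM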
Jan 23 01:38:15.916957 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:38:15.949403 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:38:15.971040 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:38:15.992477 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:38:16.399721 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:38:16.421053 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:38:16.433328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:38:16.480627 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:38:17.591303 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:38:17.709048 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 01:38:17.863585 ignition[987]: INFO : Ignition 2.22.0 Jan 23 01:38:17.863585 ignition[987]: INFO : Stage: mount Jan 23 01:38:17.888028 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:38:17.888028 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:38:17.888028 ignition[987]: INFO : mount: mount passed Jan 23 01:38:17.888028 ignition[987]: INFO : Ignition finished successfully Jan 23 01:38:17.931911 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 01:38:17.938989 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 01:38:18.047342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:38:18.150957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1000) Jan 23 01:38:18.166143 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:38:18.167058 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:38:18.205940 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:38:18.206207 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:38:18.216241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
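Annotation: the four "cut: ... No such file or directory" lines are expected on a first boot. initrd-setup-root populates the account databases on the empty root filesystem and first probes any existing ones; in spirit it does something like the following (a hypothetical reconstruction, not the actual script):

    # fails harmlessly when /sysroot/etc/passwd does not exist yet on first boot
    existing_users=$(cut -d: -f1 /sysroot/etc/passwd)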
Jan 23 01:38:18.511599 ignition[1017]: INFO : Ignition 2.22.0 Jan 23 01:38:18.511599 ignition[1017]: INFO : Stage: files Jan 23 01:38:18.542394 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:38:18.542394 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:38:18.542394 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping Jan 23 01:38:18.631687 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 01:38:18.650661 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 01:38:18.841052 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 01:38:18.878377 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 01:38:18.895586 unknown[1017]: wrote ssh authorized keys file for user: core Jan 23 01:38:18.907420 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 01:38:18.918868 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 01:38:18.941413 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 01:38:19.257145 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 01:38:20.045958 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 01:38:20.074928 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 01:38:20.091231 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 01:38:20.091231 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:38:20.119732 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 01:38:20.119732 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:38:20.119732 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 01:38:20.119732 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:38:20.119732 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 01:38:20.211904 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:38:20.521302 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 01:38:20.521302 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 01:38:20.582000 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 01:38:20.582000 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 01:38:20.582000 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 01:38:20.937376 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 01:38:30.845008 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 01:38:30.845008 ignition[1017]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 01:38:30.887708 ignition[1017]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 01:38:31.721547 ignition[1017]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 01:38:31.762505 ignition[1017]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 01:38:31.790301 ignition[1017]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 23 01:38:31.790301 ignition[1017]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 01:38:31.790301 ignition[1017]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 01:38:31.790301 ignition[1017]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:38:31.790301 ignition[1017]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 01:38:31.790301 ignition[1017]: INFO : files: files passed Jan 23 01:38:31.790301 ignition[1017]: INFO : Ignition finished successfully Jan 23 01:38:31.853952 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 01:38:31.933647 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 01:38:31.955060 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 01:38:32.076603 systemd[1]: ignition-quench.service: Deactivated successfully. 
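Annotation: the files stage above replays a user-provided Ignition config fetched from the QEMU firmware interface during fetch-offline. A minimal, hypothetical Ignition v3 config that would produce roughly this sequence of operations (the SSH key, the unit bodies, and the small files under /home/core and /etc/flatcar are elided; the real config is not visible in the log):

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["<public key elided>"] }
        ]
      },
      "storage": {
        "files": [
          { "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" } },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
            "contents": { "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "<unit elided>" },
          { "name": "coreos-metadata.service", "enabled": false, "contents": "<unit elided>" }
        ]
      }
    }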
Jan 23 01:38:32.076965 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 01:38:32.134233 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 01:38:32.179464 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:38:32.179464 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:38:32.214694 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 01:38:32.228260 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:38:32.256626 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 01:38:32.307065 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 01:38:32.657950 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 01:38:32.659637 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 01:38:32.699446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 01:38:32.745044 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 01:38:32.771393 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 01:38:32.790531 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 01:38:32.932590 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:38:32.980443 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 01:38:33.059992 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:38:33.096916 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:38:33.111230 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 01:38:33.128635 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 01:38:33.129446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 01:38:33.183095 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 01:38:33.191360 systemd[1]: Stopped target basic.target - Basic System. Jan 23 01:38:33.200502 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 01:38:33.215417 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:38:33.215710 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 01:38:33.247036 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 01:38:33.276201 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 01:38:33.312562 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 01:38:33.329397 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 01:38:33.371345 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 01:38:33.388601 systemd[1]: Stopped target swap.target - Swaps. Jan 23 01:38:33.397189 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 01:38:33.397550 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 01:38:33.422629 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:38:33.435604 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:38:33.442381 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 01:38:33.443975 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 01:38:33.472024 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 01:38:33.474424 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 01:38:33.513034 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 01:38:33.513319 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 01:38:33.543535 systemd[1]: Stopped target paths.target - Path Units. Jan 23 01:38:33.573300 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 01:38:33.574605 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:38:33.587241 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 01:38:33.636623 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 01:38:33.649727 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 01:38:33.650236 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 01:38:33.665479 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 01:38:33.673714 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 01:38:33.696625 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 01:38:33.697282 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 01:38:33.717692 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 01:38:33.718084 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 01:38:33.775994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 01:38:33.808490 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 01:38:33.821437 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 01:38:33.821680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:38:33.858413 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 01:38:33.859213 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 01:38:33.902874 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 01:38:33.903197 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 01:38:33.931675 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 01:38:33.985563 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 01:38:33.985998 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 01:38:34.484670 ignition[1072]: INFO : Ignition 2.22.0 Jan 23 01:38:34.484670 ignition[1072]: INFO : Stage: umount Jan 23 01:38:34.497267 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 01:38:34.497267 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 01:38:34.497267 ignition[1072]: INFO : umount: umount passed Jan 23 01:38:34.497267 ignition[1072]: INFO : Ignition finished successfully Jan 23 01:38:34.520186 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 23 01:38:34.520435 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 01:38:34.535684 systemd[1]: Stopped target network.target - Network. Jan 23 01:38:34.582296 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 01:38:34.582696 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 01:38:34.589333 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 01:38:34.589406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 01:38:34.618467 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 01:38:34.618583 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 01:38:34.637521 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 01:38:34.637626 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 01:38:34.654614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 01:38:34.654717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 01:38:34.699206 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 01:38:34.706965 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 01:38:34.744967 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 01:38:34.745246 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 01:38:34.800087 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 01:38:34.802022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 01:38:34.802240 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:38:34.835861 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:38:34.878191 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 01:38:34.878501 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 01:38:34.916042 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 01:38:34.926256 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 01:38:34.926604 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 01:38:34.926677 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:38:34.948655 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 01:38:34.982006 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 01:38:34.983217 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 01:38:35.005442 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:38:35.005608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:38:35.023701 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 01:38:35.024003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 01:38:35.030599 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:38:35.092623 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:38:35.115085 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 01:38:35.115495 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 23 01:38:35.147253 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 01:38:35.147386 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 01:38:35.176407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 01:38:35.176493 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:38:35.188283 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 01:38:35.188407 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 01:38:35.201375 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 01:38:35.201486 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 01:38:35.243726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 01:38:35.243997 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 01:38:35.325466 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 01:38:35.331444 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 01:38:35.331543 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:38:35.382644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 01:38:35.382962 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 01:38:35.432286 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 01:38:35.432392 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:38:35.489897 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 01:38:35.489987 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:38:35.501602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:38:35.501692 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:38:35.536635 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 01:38:35.536724 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 01:38:35.536920 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 01:38:35.536986 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:38:35.539706 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 01:38:35.540023 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 01:38:35.579697 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 01:38:35.580226 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 01:38:35.601030 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 01:38:35.648608 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 01:38:35.783450 systemd[1]: Switching root. Jan 23 01:38:35.853520 systemd-journald[204]: Journal stopped Jan 23 01:38:41.117362 systemd-journald[204]: Received SIGTERM from PID 1 (systemd). 
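Annotation: the "Switching root" and journald SIGTERM lines mark the hand-off from the initrd to the real root filesystem; PID 1 re-executes itself on the new root and the initrd journal stops. The operation is equivalent to (sketch):

    systemctl switch-root /sysroot   # PID 1 pivots to /sysroot and re-executes systemd there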
Jan 23 01:38:41.117449 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 01:38:41.117483 kernel: SELinux: policy capability open_perms=1 Jan 23 01:38:41.117500 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 01:38:41.117517 kernel: SELinux: policy capability always_check_network=0 Jan 23 01:38:41.117534 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 01:38:41.117729 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 01:38:41.117901 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 01:38:41.117920 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 01:38:41.117936 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 01:38:41.117953 kernel: audit: type=1403 audit(1769132316.525:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 01:38:41.117971 systemd[1]: Successfully loaded SELinux policy in 250.945ms. Jan 23 01:38:41.118005 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.981ms. Jan 23 01:38:41.118024 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 01:38:41.118047 systemd[1]: Detected virtualization kvm. Jan 23 01:38:41.118067 systemd[1]: Detected architecture x86-64. Jan 23 01:38:41.118085 systemd[1]: Detected first boot. Jan 23 01:38:41.118103 systemd[1]: Initializing machine ID from VM UUID. Jan 23 01:38:41.118208 zram_generator::config[1117]: No configuration found. Jan 23 01:38:41.118233 kernel: Guest personality initialized and is inactive Jan 23 01:38:41.118250 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 01:38:41.118266 kernel: Initialized host personality Jan 23 01:38:41.118290 kernel: NET: Registered PF_VSOCK protocol family Jan 23 01:38:41.118312 systemd[1]: Populated /etc with preset unit settings. Jan 23 01:38:41.118331 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 01:38:41.118350 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 01:38:41.118371 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 01:38:41.118459 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 01:38:41.118477 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 01:38:41.118494 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 01:38:41.118511 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 01:38:41.118532 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 01:38:41.118553 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 01:38:41.118569 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 01:38:41.118585 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 01:38:41.118611 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 01:38:41.118627 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
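Annotation: "Detected virtualization kvm" and "Initializing machine ID from VM UUID" can be confirmed later from a shell on the booted machine, for example (assuming the default sysfs layout):

    systemd-detect-virt                   # prints "kvm" on this machine
    cat /sys/class/dmi/id/product_uuid    # the VM UUID the machine ID is derived from on first boot
    cat /etc/machine-id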
Jan 23 01:38:41.118643 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 01:38:41.118662 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 01:38:41.118679 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 01:38:41.118700 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 01:38:41.118719 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 01:38:41.118872 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 01:38:41.118896 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 01:38:41.118912 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 01:38:41.118928 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 01:38:41.118947 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 01:38:41.118964 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 01:38:41.119057 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 01:38:41.119078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 01:38:41.119095 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 01:38:41.119110 systemd[1]: Reached target slices.target - Slice Units. Jan 23 01:38:41.119217 systemd[1]: Reached target swap.target - Swaps. Jan 23 01:38:41.119241 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 01:38:41.119260 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 01:38:41.119279 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 01:38:41.119295 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 01:38:41.119319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 01:38:41.119339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 01:38:41.119357 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 01:38:41.119376 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 01:38:41.119394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 01:38:41.119414 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 01:38:41.119432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:38:41.119448 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 01:38:41.119463 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 01:38:41.119486 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 01:38:41.119502 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 01:38:41.119522 systemd[1]: Reached target machines.target - Containers. Jan 23 01:38:41.119609 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 23 01:38:41.119627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:38:41.119643 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 01:38:41.119661 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 01:38:41.119676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:38:41.119699 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:38:41.119715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:38:41.119730 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 01:38:41.119876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:38:41.119895 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 01:38:41.119915 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 01:38:41.119931 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 01:38:41.119946 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 01:38:41.119963 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 01:38:41.119985 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:38:41.120003 kernel: ACPI: bus type drm_connector registered Jan 23 01:38:41.120018 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 01:38:41.120036 kernel: fuse: init (API version 7.41) Jan 23 01:38:41.120051 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 01:38:41.120067 kernel: loop: module loaded Jan 23 01:38:41.120085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 01:38:41.120271 systemd-journald[1202]: Collecting audit messages is disabled. Jan 23 01:38:41.120308 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 01:38:41.120325 systemd-journald[1202]: Journal started Jan 23 01:38:41.120355 systemd-journald[1202]: Runtime Journal (/run/log/journal/5bc2d0d5c2af4c49a95aa133d44a99e0) is 6M, max 48.1M, 42.1M free. Jan 23 01:38:39.214489 systemd[1]: Queued start job for default target multi-user.target. Jan 23 01:38:39.308565 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 23 01:38:39.327017 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 01:38:39.340533 systemd[1]: systemd-journald.service: Consumed 3.314s CPU time. Jan 23 01:38:41.145981 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 01:38:41.193936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 01:38:41.218308 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 01:38:41.219241 systemd[1]: Stopped verity-setup.service. Jan 23 01:38:41.240938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:38:41.270973 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 23 01:38:41.296968 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 01:38:41.315205 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 01:38:41.328686 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 01:38:41.342559 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 01:38:41.354392 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 01:38:41.378673 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 01:38:41.389449 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 01:38:41.407603 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 01:38:41.432294 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 01:38:41.433224 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 01:38:41.444604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:38:41.445400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:38:41.467398 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:38:41.468393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:38:41.489426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:38:41.489963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:38:41.499494 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 01:38:41.500021 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 01:38:41.509299 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:38:41.509868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:38:41.519176 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 01:38:41.528072 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 01:38:41.539212 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 01:38:41.549936 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 01:38:41.573602 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:38:41.607477 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 01:38:41.625694 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 01:38:41.656464 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 01:38:41.684943 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 01:38:41.685210 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 01:38:41.700642 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 01:38:41.723036 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 01:38:41.731916 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:38:41.756504 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 01:38:41.789392 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 23 01:38:41.800565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:38:41.811470 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:38:41.822101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:38:41.832053 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:38:41.855710 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:38:41.888348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:38:41.895283 systemd-journald[1202]: Time spent on flushing to /var/log/journal/5bc2d0d5c2af4c49a95aa133d44a99e0 is 160.489ms for 1067 entries. Jan 23 01:38:41.895283 systemd-journald[1202]: System Journal (/var/log/journal/5bc2d0d5c2af4c49a95aa133d44a99e0) is 8M, max 195.6M, 187.6M free. Jan 23 01:38:42.117348 systemd-journald[1202]: Received client request to flush runtime journal. Jan 23 01:38:42.117418 kernel: loop0: detected capacity change from 0 to 229808 Jan 23 01:38:41.935897 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:38:41.956116 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:38:41.998915 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:38:42.022966 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:38:42.056411 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:38:42.120356 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:38:42.315885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:38:42.337108 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 23 01:38:42.337222 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 23 01:38:42.342908 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:38:42.349912 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:38:42.352533 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:38:42.383060 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:38:42.399949 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:38:42.425379 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 01:38:42.909998 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 01:38:43.216580 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:38:43.233868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:38:43.288985 kernel: loop3: detected capacity change from 0 to 229808 Jan 23 01:38:43.327878 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 23 01:38:43.327909 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 23 01:38:43.337928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
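Annotation: the journal-flush messages show the small runtime journal in /run being migrated to the persistent journal under /var/log/journal. The flush service boils down to (hedged, but this is how the upstream unit is implemented):

    journalctl --flush   # move buffered entries from /run/log/journal to /var/log/journal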
Jan 23 01:38:43.384961 kernel: loop4: detected capacity change from 0 to 128560 Jan 23 01:38:43.512498 kernel: loop5: detected capacity change from 0 to 110984 Jan 23 01:38:43.731122 (sd-merge)[1262]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 23 01:38:43.732386 (sd-merge)[1262]: Merged extensions into '/usr'. Jan 23 01:38:43.750025 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:38:43.750207 systemd[1]: Reloading... Jan 23 01:38:44.231964 zram_generator::config[1290]: No configuration found. Jan 23 01:38:46.018454 systemd[1]: Reloading finished in 2265 ms. Jan 23 01:38:46.067971 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:38:46.066616 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:38:46.090566 systemd[1]: Starting ensure-sysext.service... Jan 23 01:38:46.100267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:38:46.140366 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:38:46.161401 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:38:46.161430 systemd[1]: Reloading... Jan 23 01:38:46.229306 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:38:46.230093 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:38:46.230665 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:38:46.231493 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:38:46.234351 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:38:46.235326 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 23 01:38:46.235530 systemd-tmpfiles[1328]: ACLs are not supported, ignoring. Jan 23 01:38:46.248251 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:38:46.248323 systemd-tmpfiles[1328]: Skipping /boot Jan 23 01:38:46.816333 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:38:46.816355 systemd-tmpfiles[1328]: Skipping /boot Jan 23 01:38:46.855069 zram_generator::config[1353]: No configuration found. Jan 23 01:38:47.934602 systemd[1]: Reloading finished in 1771 ms. Jan 23 01:38:47.989984 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:38:48.068575 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:38:48.112568 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:38:48.143928 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:38:48.199321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:38:48.211993 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:38:48.225981 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
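Annotation: the loop0-loop5 capacity changes together with the "(sd-merge)" lines show systemd-sysext attaching the containerd-flatcar, docker-flatcar and kubernetes extension images (including the kubernetes .raw written by Ignition above) and overlaying them onto /usr. The merge can be inspected or repeated by hand (sketch):

    systemd-sysext status    # lists merged extension images and the hierarchies they affect
    systemd-sysext refresh   # re-merge after adding or removing images under /etc/extensions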
Jan 23 01:38:48.325711 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:38:48.326527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:38:48.830343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:38:48.848283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:38:48.902922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:38:48.906898 augenrules[1422]: No rules Jan 23 01:38:48.913504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:38:48.913896 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:38:48.922290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:38:48.955376 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:38:48.997678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:38:49.007377 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:38:49.008002 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:38:49.026639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:38:49.041645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:38:49.056463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:38:49.057057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:38:49.092644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:38:49.093611 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:38:49.107949 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:38:49.108540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:38:49.129225 systemd-udevd[1427]: Using default interface naming scheme 'v255'. Jan 23 01:38:49.154455 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:38:49.184969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:38:49.193318 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:38:49.203576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:38:49.207399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:38:49.221361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:38:49.238261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:38:49.262443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:38:49.276988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 01:38:49.277128 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:38:49.432226 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:38:49.433064 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 01:38:49.433331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:38:49.486511 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:38:49.503264 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 01:38:49.521263 systemd[1]: Finished ensure-sysext.service. Jan 23 01:38:49.534505 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:38:49.535067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:38:49.549511 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:38:49.550087 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:38:49.574443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:38:49.586321 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 01:38:49.641322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:38:49.641701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:38:49.653989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:38:49.654494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:38:49.691575 augenrules[1436]: /sbin/augenrules: No change Jan 23 01:38:49.692130 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:38:49.692330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:38:49.713081 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:38:49.756885 augenrules[1498]: No rules Jan 23 01:38:49.760606 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:38:49.777522 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:38:49.793083 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:38:50.438870 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:38:50.457849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 23 01:38:50.459950 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 01:38:50.495891 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:38:50.506589 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 23 01:38:50.548843 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 01:38:50.555858 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:38:50.580630 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:38:50.597047 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 01:38:50.606320 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:38:51.092391 systemd-resolved[1398]: Positive Trust Anchors: Jan 23 01:38:51.092528 systemd-resolved[1398]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:38:51.092584 systemd-resolved[1398]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:38:51.187368 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:38:51.210139 systemd-resolved[1398]: Defaulting to hostname 'linux'. Jan 23 01:38:51.217535 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 01:38:51.227542 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:38:51.235238 systemd-networkd[1471]: lo: Link UP Jan 23 01:38:51.236020 systemd-networkd[1471]: lo: Gained carrier Jan 23 01:38:51.239425 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:38:51.240361 systemd-networkd[1471]: Enumeration completed Jan 23 01:38:51.244081 systemd-networkd[1471]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:38:51.244088 systemd-networkd[1471]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:38:51.250471 systemd-networkd[1471]: eth0: Link UP Jan 23 01:38:51.251223 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:38:51.251258 systemd-networkd[1471]: eth0: Gained carrier Jan 23 01:38:51.251290 systemd-networkd[1471]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:38:51.279670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:38:51.296048 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:38:51.305625 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:38:51.314212 systemd-networkd[1471]: eth0: DHCPv4 address 10.0.0.137/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 01:38:51.315455 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:38:51.318580 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Jan 23 01:38:52.454258 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 23 01:38:52.454329 systemd-timesyncd[1475]: Initial clock synchronization to Fri 2026-01-23 01:38:52.454142 UTC. 
Jan 23 01:38:52.455429 systemd-resolved[1398]: Clock change detected. Flushing caches. Jan 23 01:38:52.460756 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:38:52.482040 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:38:52.482126 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:38:52.490097 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:38:52.518927 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:38:52.532865 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:38:52.546084 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:38:52.558430 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:38:52.569437 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:38:52.590145 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:38:52.628397 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:38:52.666908 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:38:52.682797 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:38:52.743512 systemd[1]: Reached target network.target - Network. Jan 23 01:38:52.756759 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:38:52.767112 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:38:52.776146 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:38:52.776298 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:38:52.781861 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:38:52.820300 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:38:52.847173 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:38:52.926084 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:38:53.029669 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 01:38:53.073676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:38:53.080390 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:38:53.118357 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:38:53.125322 jq[1540]: false Jan 23 01:38:53.132711 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:38:53.154719 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:38:53.174927 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:38:53.177032 extend-filesystems[1541]: Found /dev/vda6 Jan 23 01:38:53.225206 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 01:38:53.231876 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing passwd entry cache Jan 23 01:38:53.232431 oslogin_cache_refresh[1542]: Refreshing passwd entry cache Jan 23 01:38:53.238135 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:38:53.262247 extend-filesystems[1541]: Found /dev/vda9 Jan 23 01:38:53.262914 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:38:53.269777 oslogin_cache_refresh[1542]: Failure getting users, quitting Jan 23 01:38:53.270745 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting users, quitting Jan 23 01:38:53.270745 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:38:53.270745 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Refreshing group entry cache Jan 23 01:38:53.269806 oslogin_cache_refresh[1542]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:38:53.269878 oslogin_cache_refresh[1542]: Refreshing group entry cache Jan 23 01:38:53.286423 oslogin_cache_refresh[1542]: Failure getting groups, quitting Jan 23 01:38:53.286889 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Failure getting groups, quitting Jan 23 01:38:53.286889 google_oslogin_nss_cache[1542]: oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:38:53.286443 oslogin_cache_refresh[1542]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:38:53.295291 extend-filesystems[1541]: Checking size of /dev/vda9 Jan 23 01:38:53.323332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:38:53.326355 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:38:53.426058 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:38:53.446816 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:38:53.463390 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:38:53.510488 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:38:53.530660 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:38:53.533229 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:38:53.534169 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:38:53.536374 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:38:53.553472 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:38:53.554161 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:38:53.587896 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:38:53.588402 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 01:38:53.835801 systemd-networkd[1471]: eth0: Gained IPv6LL Jan 23 01:38:53.848907 jq[1564]: true Jan 23 01:38:53.874485 extend-filesystems[1541]: Resized partition /dev/vda9 Jan 23 01:38:53.913491 extend-filesystems[1576]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:38:53.951347 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 23 01:38:53.911861 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:38:53.955509 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:38:53.956283 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:38:53.982674 update_engine[1563]: I20260123 01:38:53.979671 1563 main.cc:92] Flatcar Update Engine starting Jan 23 01:38:54.008517 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 23 01:38:54.026525 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:38:54.029164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:38:54.037343 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:38:54.068503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:38:54.070199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:38:54.077840 jq[1574]: true Jan 23 01:38:54.088139 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:38:54.115245 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:38:54.143156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:38:54.142466 dbus-daemon[1538]: [system] SELinux support is enabled Jan 23 01:38:54.153394 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:38:54.215408 tar[1569]: linux-amd64/LICENSE Jan 23 01:38:54.215408 tar[1569]: linux-amd64/helm Jan 23 01:38:54.253737 update_engine[1563]: I20260123 01:38:54.249475 1563 update_check_scheduler.cc:74] Next update check in 9m44s Jan 23 01:38:54.253404 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:38:54.267192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:38:54.267235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:38:54.277235 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:38:54.277273 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:38:54.299873 systemd-logind[1556]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:38:54.299918 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:38:54.300439 systemd-logind[1556]: New seat seat0. Jan 23 01:38:54.312254 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 01:38:54.343802 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:38:54.344456 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 01:38:54.349803 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:38:54.383089 extend-filesystems[1576]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 01:38:54.383089 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 01:38:54.383089 extend-filesystems[1576]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 23 01:38:54.379871 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:38:54.406150 extend-filesystems[1541]: Resized filesystem in /dev/vda9 Jan 23 01:38:54.424915 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:38:54.443118 bash[1625]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:38:54.594664 kernel: kvm_amd: TSC scaling supported Jan 23 01:38:54.594892 kernel: kvm_amd: Nested Virtualization enabled Jan 23 01:38:54.594929 kernel: kvm_amd: Nested Paging enabled Jan 23 01:38:54.613281 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 23 01:38:54.613454 kernel: kvm_amd: PMU virtualization is disabled Jan 23 01:38:54.652528 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:38:54.679355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:38:54.698271 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 01:38:54.698825 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 01:38:54.743361 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:38:54.744813 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 01:38:54.850464 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:38:55.117721 containerd[1583]: time="2026-01-23T01:38:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:38:55.124812 kernel: EDAC MC: Ver: 3.0.0 Jan 23 01:38:55.125938 containerd[1583]: time="2026-01-23T01:38:55.125322305Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.178506017Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.407µs" Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.178690812Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.178714868Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.178950388Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.179067456Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.179109424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.179185627Z" level=info 
msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:38:55.180104 containerd[1583]: time="2026-01-23T01:38:55.179196737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:38:55.182848 containerd[1583]: time="2026-01-23T01:38:55.182746015Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:38:55.182848 containerd[1583]: time="2026-01-23T01:38:55.182845401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:38:55.186126 containerd[1583]: time="2026-01-23T01:38:55.182868634Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:38:55.186126 containerd[1583]: time="2026-01-23T01:38:55.182883412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:38:55.186126 containerd[1583]: time="2026-01-23T01:38:55.183101489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:38:55.186126 containerd[1583]: time="2026-01-23T01:38:55.183519429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:38:55.186126 containerd[1583]: time="2026-01-23T01:38:55.185709820Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:38:55.186126 containerd[1583]: time="2026-01-23T01:38:55.185730388Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:38:55.186279 containerd[1583]: time="2026-01-23T01:38:55.186258734Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:38:55.187733 containerd[1583]: time="2026-01-23T01:38:55.187472441Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:38:55.187939 containerd[1583]: time="2026-01-23T01:38:55.187841841Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:38:55.203798 containerd[1583]: time="2026-01-23T01:38:55.203194950Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:38:55.203798 containerd[1583]: time="2026-01-23T01:38:55.203792846Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203828273Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203847178Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203862897Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:38:55.203942 containerd[1583]: 
time="2026-01-23T01:38:55.203875952Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203896611Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203914935Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203928791Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:38:55.203942 containerd[1583]: time="2026-01-23T01:38:55.203941203Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:38:55.205188 containerd[1583]: time="2026-01-23T01:38:55.205101881Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:38:55.205233 containerd[1583]: time="2026-01-23T01:38:55.205199584Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:38:55.205500 containerd[1583]: time="2026-01-23T01:38:55.205398595Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:38:55.205641 containerd[1583]: time="2026-01-23T01:38:55.205502269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:38:55.205788 containerd[1583]: time="2026-01-23T01:38:55.205529449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:38:55.205788 containerd[1583]: time="2026-01-23T01:38:55.205748859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:38:55.205788 containerd[1583]: time="2026-01-23T01:38:55.205769488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:38:55.205788 containerd[1583]: time="2026-01-23T01:38:55.205785277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:38:55.205901 containerd[1583]: time="2026-01-23T01:38:55.205803572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:38:55.205901 containerd[1583]: time="2026-01-23T01:38:55.205819230Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:38:55.205901 containerd[1583]: time="2026-01-23T01:38:55.205840420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:38:55.205901 containerd[1583]: time="2026-01-23T01:38:55.205856130Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:38:55.205901 containerd[1583]: time="2026-01-23T01:38:55.205871869Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:38:55.209755 containerd[1583]: time="2026-01-23T01:38:55.209677486Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:38:55.209755 containerd[1583]: time="2026-01-23T01:38:55.209722519Z" level=info msg="Start snapshots syncer" Jan 23 01:38:55.209859 containerd[1583]: 
time="2026-01-23T01:38:55.209763416Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:38:55.213301 containerd[1583]: time="2026-01-23T01:38:55.211853498Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:38:55.213301 containerd[1583]: time="2026-01-23T01:38:55.212436608Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.212523029Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213189433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213229748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213249576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213271116Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213298557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213320909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213338251Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213376733Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213431416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213467463Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:38:55.214265 containerd[1583]: time="2026-01-23T01:38:55.213524208Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214278257Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214303343Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214322068Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214333239Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214350612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214382882Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214409171Z" level=info msg="runtime interface created" Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214421835Z" level=info msg="created NRI interface" Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214433727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214454977Z" level=info msg="Connect containerd service" Jan 23 01:38:55.214719 containerd[1583]: time="2026-01-23T01:38:55.214493218Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:38:55.232862 containerd[1583]: time="2026-01-23T01:38:55.231280127Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:38:55.325685 tar[1569]: linux-amd64/README.md Jan 23 01:38:55.373356 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413410633Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413665719Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413702468Z" level=info msg="Start subscribing containerd event" Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413728306Z" level=info msg="Start recovering state" Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413845184Z" level=info msg="Start event monitor" Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413863458Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:38:55.413840 containerd[1583]: time="2026-01-23T01:38:55.413870612Z" level=info msg="Start streaming server" Jan 23 01:38:55.414267 containerd[1583]: time="2026-01-23T01:38:55.413879418Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:38:55.414267 containerd[1583]: time="2026-01-23T01:38:55.413887083Z" level=info msg="runtime interface starting up..." Jan 23 01:38:55.414267 containerd[1583]: time="2026-01-23T01:38:55.413893455Z" level=info msg="starting plugins..." Jan 23 01:38:55.414267 containerd[1583]: time="2026-01-23T01:38:55.413910015Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:38:55.414267 containerd[1583]: time="2026-01-23T01:38:55.414173237Z" level=info msg="containerd successfully booted in 0.301804s" Jan 23 01:38:55.415270 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:38:55.704919 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:38:55.833105 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:38:55.848774 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:38:55.861463 systemd[1]: Started sshd@0-10.0.0.137:22-10.0.0.1:50350.service - OpenSSH per-connection server daemon (10.0.0.1:50350). Jan 23 01:38:55.960026 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:38:55.961038 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:38:55.974807 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:38:57.139732 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:38:57.213360 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:38:57.238325 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:38:57.250250 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:38:57.849950 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 50350 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:38:57.856228 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:38:57.926529 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:38:57.954267 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:38:58.026857 systemd-logind[1556]: New session 1 of user core. Jan 23 01:38:58.567729 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:38:58.593467 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:38:58.653348 (systemd)[1688]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:38:58.667297 systemd-logind[1556]: New session c1 of user core. Jan 23 01:38:59.891958 systemd[1688]: Queued start job for default target default.target. 
Jan 23 01:38:59.910910 systemd[1688]: Created slice app.slice - User Application Slice. Jan 23 01:38:59.911044 systemd[1688]: Reached target paths.target - Paths. Jan 23 01:38:59.911390 systemd[1688]: Reached target timers.target - Timers. Jan 23 01:38:59.917154 systemd[1688]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:39:00.051363 systemd[1688]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:39:00.051681 systemd[1688]: Reached target sockets.target - Sockets. Jan 23 01:39:00.051742 systemd[1688]: Reached target basic.target - Basic System. Jan 23 01:39:00.051801 systemd[1688]: Reached target default.target - Main User Target. Jan 23 01:39:00.051854 systemd[1688]: Startup finished in 1.305s. Jan 23 01:39:00.052160 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:39:00.092378 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:39:00.546429 systemd[1]: Started sshd@1-10.0.0.137:22-10.0.0.1:38220.service - OpenSSH per-connection server daemon (10.0.0.1:38220). Jan 23 01:39:02.248711 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 38220 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:02.290760 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:02.428851 systemd-logind[1556]: New session 2 of user core. Jan 23 01:39:02.441789 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:39:02.769433 sshd[1702]: Connection closed by 10.0.0.1 port 38220 Jan 23 01:39:02.773260 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:02.798342 systemd[1]: sshd@1-10.0.0.137:22-10.0.0.1:38220.service: Deactivated successfully. Jan 23 01:39:02.804402 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:39:02.814148 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:39:02.818092 systemd[1]: Started sshd@2-10.0.0.137:22-10.0.0.1:59562.service - OpenSSH per-connection server daemon (10.0.0.1:59562). Jan 23 01:39:02.824321 systemd-logind[1556]: Removed session 2. Jan 23 01:39:03.627454 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 59562 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:03.687911 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:03.729689 systemd-logind[1556]: New session 3 of user core. Jan 23 01:39:03.738407 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:39:03.867201 sshd[1715]: Connection closed by 10.0.0.1 port 59562 Jan 23 01:39:03.869522 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:03.882365 systemd[1]: sshd@2-10.0.0.137:22-10.0.0.1:59562.service: Deactivated successfully. Jan 23 01:39:03.886430 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:39:03.888345 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:39:04.017069 systemd-logind[1556]: Removed session 3. Jan 23 01:39:04.769362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:39:04.770735 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:39:04.775418 systemd[1]: Startup finished in 21.612s (kernel) + 45.034s (initrd) + 27.343s (userspace) = 1min 33.989s. 
Jan 23 01:39:04.803502 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:39:13.933753 systemd[1]: Started sshd@3-10.0.0.137:22-10.0.0.1:36742.service - OpenSSH per-connection server daemon (10.0.0.1:36742). Jan 23 01:39:14.893416 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 36742 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:14.922324 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:14.944256 systemd-logind[1556]: New session 4 of user core. Jan 23 01:39:14.951139 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:39:15.060901 sshd[1737]: Connection closed by 10.0.0.1 port 36742 Jan 23 01:39:15.063849 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:15.088922 systemd[1]: sshd@3-10.0.0.137:22-10.0.0.1:36742.service: Deactivated successfully. Jan 23 01:39:15.093406 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:39:15.097183 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:39:15.112146 systemd[1]: Started sshd@4-10.0.0.137:22-10.0.0.1:36758.service - OpenSSH per-connection server daemon (10.0.0.1:36758). Jan 23 01:39:15.115156 systemd-logind[1556]: Removed session 4. Jan 23 01:39:15.163477 kubelet[1725]: E0123 01:39:15.162954 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:39:15.171940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:39:15.172352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:39:15.175782 systemd[1]: kubelet.service: Consumed 13.815s CPU time, 270.8M memory peak. Jan 23 01:39:15.234750 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 36758 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:15.238329 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:15.264759 systemd-logind[1556]: New session 5 of user core. Jan 23 01:39:15.283969 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:39:15.382323 sshd[1747]: Connection closed by 10.0.0.1 port 36758 Jan 23 01:39:15.382804 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:15.396749 systemd[1]: sshd@4-10.0.0.137:22-10.0.0.1:36758.service: Deactivated successfully. Jan 23 01:39:15.400266 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:39:15.401952 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:39:15.406808 systemd[1]: Started sshd@5-10.0.0.137:22-10.0.0.1:36772.service - OpenSSH per-connection server daemon (10.0.0.1:36772). Jan 23 01:39:15.409271 systemd-logind[1556]: Removed session 5. Jan 23 01:39:15.521747 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 36772 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:15.525165 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:15.545866 systemd-logind[1556]: New session 6 of user core. 
Jan 23 01:39:15.571114 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:39:15.683495 sshd[1756]: Connection closed by 10.0.0.1 port 36772 Jan 23 01:39:15.685838 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:15.750789 systemd[1]: sshd@5-10.0.0.137:22-10.0.0.1:36772.service: Deactivated successfully. Jan 23 01:39:15.769782 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:39:15.776901 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:39:15.824209 systemd[1]: Started sshd@6-10.0.0.137:22-10.0.0.1:36774.service - OpenSSH per-connection server daemon (10.0.0.1:36774). Jan 23 01:39:15.828938 systemd-logind[1556]: Removed session 6. Jan 23 01:39:16.320898 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 36774 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:16.325295 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:16.350703 systemd-logind[1556]: New session 7 of user core. Jan 23 01:39:16.360288 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:39:16.478210 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:39:16.478959 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:39:16.615319 sudo[1766]: pam_unix(sudo:session): session closed for user root Jan 23 01:39:16.626080 sshd[1765]: Connection closed by 10.0.0.1 port 36774 Jan 23 01:39:16.628216 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:16.659390 systemd[1]: sshd@6-10.0.0.137:22-10.0.0.1:36774.service: Deactivated successfully. Jan 23 01:39:16.666514 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:39:16.682518 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:39:16.717952 systemd[1]: Started sshd@7-10.0.0.137:22-10.0.0.1:36780.service - OpenSSH per-connection server daemon (10.0.0.1:36780). Jan 23 01:39:16.723427 systemd-logind[1556]: Removed session 7. Jan 23 01:39:17.015509 sshd[1772]: Accepted publickey for core from 10.0.0.1 port 36780 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:17.026900 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:17.064803 systemd-logind[1556]: New session 8 of user core. Jan 23 01:39:17.115815 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:39:17.295471 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:39:17.297439 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:39:17.374081 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 23 01:39:17.456869 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:39:17.457790 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:39:17.574317 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:39:18.016399 augenrules[1799]: No rules Jan 23 01:39:18.021836 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:39:18.022508 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 23 01:39:18.029831 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 23 01:39:18.038481 sshd[1775]: Connection closed by 10.0.0.1 port 36780 Jan 23 01:39:18.043422 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 23 01:39:18.058237 systemd[1]: sshd@7-10.0.0.137:22-10.0.0.1:36780.service: Deactivated successfully. Jan 23 01:39:18.064970 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:39:18.071945 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:39:18.081965 systemd[1]: Started sshd@8-10.0.0.137:22-10.0.0.1:36786.service - OpenSSH per-connection server daemon (10.0.0.1:36786). Jan 23 01:39:18.084964 systemd-logind[1556]: Removed session 8. Jan 23 01:39:18.329386 sshd[1808]: Accepted publickey for core from 10.0.0.1 port 36786 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:39:18.332320 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:39:18.412779 systemd-logind[1556]: New session 9 of user core. Jan 23 01:39:18.434274 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:39:18.538847 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:39:18.542956 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:39:25.403860 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:39:25.413276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:39:29.682504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:39:29.743683 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:39:30.648285 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:39:31.052716 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:39:31.236744 kubelet[1841]: E0123 01:39:31.236260 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:39:31.245804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:39:31.246232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:39:31.247737 systemd[1]: kubelet.service: Consumed 4.585s CPU time, 110.8M memory peak. Jan 23 01:39:33.651016 dockerd[1849]: time="2026-01-23T01:39:33.650069712Z" level=info msg="Starting up" Jan 23 01:39:33.655357 dockerd[1849]: time="2026-01-23T01:39:33.655186481Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:39:35.271705 dockerd[1849]: time="2026-01-23T01:39:35.270034974Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:39:36.528384 dockerd[1849]: time="2026-01-23T01:39:36.523215066Z" level=info msg="Loading containers: start." 
Jan 23 01:39:36.945177 kernel: Initializing XFRM netlink socket Jan 23 01:39:38.589701 systemd-networkd[1471]: docker0: Link UP Jan 23 01:39:38.602375 dockerd[1849]: time="2026-01-23T01:39:38.602190419Z" level=info msg="Loading containers: done." Jan 23 01:39:38.655155 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1003116619-merged.mount: Deactivated successfully. Jan 23 01:39:38.662151 dockerd[1849]: time="2026-01-23T01:39:38.661867698Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:39:38.662957 dockerd[1849]: time="2026-01-23T01:39:38.662466617Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:39:38.663265 dockerd[1849]: time="2026-01-23T01:39:38.663156947Z" level=info msg="Initializing buildkit" Jan 23 01:39:38.806199 dockerd[1849]: time="2026-01-23T01:39:38.805505125Z" level=info msg="Completed buildkit initialization" Jan 23 01:39:38.824215 dockerd[1849]: time="2026-01-23T01:39:38.824037036Z" level=info msg="Daemon has completed initialization" Jan 23 01:39:38.824838 dockerd[1849]: time="2026-01-23T01:39:38.824674397Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:39:38.825885 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:39:39.790270 update_engine[1563]: I20260123 01:39:39.789268 1563 update_attempter.cc:509] Updating boot flags... Jan 23 01:39:40.451681 containerd[1583]: time="2026-01-23T01:39:40.451216395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 01:39:41.160146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903893722.mount: Deactivated successfully. Jan 23 01:39:41.385206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:39:41.396295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:39:41.995425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:39:42.022007 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:39:42.222493 kubelet[2114]: E0123 01:39:42.222079 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:39:42.230175 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:39:42.230813 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:39:42.232178 systemd[1]: kubelet.service: Consumed 635ms CPU time, 111M memory peak. 
Jan 23 01:39:47.126137 containerd[1583]: time="2026-01-23T01:39:47.125944597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:39:47.129181 containerd[1583]: time="2026-01-23T01:39:47.128930992Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 23 01:39:47.133820 containerd[1583]: time="2026-01-23T01:39:47.133514232Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:39:47.143207 containerd[1583]: time="2026-01-23T01:39:47.142986196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:39:47.146461 containerd[1583]: time="2026-01-23T01:39:47.146407492Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 6.695003609s" Jan 23 01:39:47.149090 containerd[1583]: time="2026-01-23T01:39:47.147060645Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 01:39:47.153901 containerd[1583]: time="2026-01-23T01:39:47.153236551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 01:39:52.483141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 01:39:52.839056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:39:56.161028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:39:56.354481 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:39:56.853091 kubelet[2168]: E0123 01:39:56.851953 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:39:56.862200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:39:56.862766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:39:56.866020 systemd[1]: kubelet.service: Consumed 3.520s CPU time, 108.2M memory peak. 
Jan 23 01:39:57.898518 containerd[1583]: time="2026-01-23T01:39:57.898223165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:39:57.902020 containerd[1583]: time="2026-01-23T01:39:57.901987932Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 23 01:39:57.905983 containerd[1583]: time="2026-01-23T01:39:57.905954201Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:39:57.915258 containerd[1583]: time="2026-01-23T01:39:57.915216726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:39:57.917081 containerd[1583]: time="2026-01-23T01:39:57.916976656Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 10.763626001s" Jan 23 01:39:57.917164 containerd[1583]: time="2026-01-23T01:39:57.917082923Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 01:39:57.920050 containerd[1583]: time="2026-01-23T01:39:57.919292289Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 01:40:01.087248 containerd[1583]: time="2026-01-23T01:40:01.086950443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:01.090006 containerd[1583]: time="2026-01-23T01:40:01.089303758Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 23 01:40:01.093013 containerd[1583]: time="2026-01-23T01:40:01.092890214Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:01.101471 containerd[1583]: time="2026-01-23T01:40:01.101266946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:01.104786 containerd[1583]: time="2026-01-23T01:40:01.104445659Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 3.185118655s" Jan 23 01:40:01.105020 containerd[1583]: time="2026-01-23T01:40:01.104864991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 01:40:01.106350 
containerd[1583]: time="2026-01-23T01:40:01.105517698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 01:40:03.329154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114665301.mount: Deactivated successfully. Jan 23 01:40:05.749438 containerd[1583]: time="2026-01-23T01:40:05.749134724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:05.754239 containerd[1583]: time="2026-01-23T01:40:05.754196821Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 23 01:40:05.760232 containerd[1583]: time="2026-01-23T01:40:05.760073111Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:05.768439 containerd[1583]: time="2026-01-23T01:40:05.768011423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:05.768439 containerd[1583]: time="2026-01-23T01:40:05.768256771Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.662299375s" Jan 23 01:40:05.768439 containerd[1583]: time="2026-01-23T01:40:05.768289453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 01:40:05.772090 containerd[1583]: time="2026-01-23T01:40:05.771465725Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 01:40:06.567195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount37823666.mount: Deactivated successfully. Jan 23 01:40:06.886184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 01:40:06.891178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:40:07.417200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:07.443415 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:40:07.607187 kubelet[2224]: E0123 01:40:07.607050 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:40:07.612232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:40:07.613119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:40:07.614204 systemd[1]: kubelet.service: Consumed 577ms CPU time, 109.5M memory peak. 
Jan 23 01:40:09.353379 containerd[1583]: time="2026-01-23T01:40:09.351938449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:09.355353 containerd[1583]: time="2026-01-23T01:40:09.355150137Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 23 01:40:09.359426 containerd[1583]: time="2026-01-23T01:40:09.358495023Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:09.367879 containerd[1583]: time="2026-01-23T01:40:09.366236958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:09.368171 containerd[1583]: time="2026-01-23T01:40:09.368015686Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.596519054s" Jan 23 01:40:09.368171 containerd[1583]: time="2026-01-23T01:40:09.368157420Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 01:40:09.370408 containerd[1583]: time="2026-01-23T01:40:09.370262206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:40:10.124985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436391231.mount: Deactivated successfully. 
Jan 23 01:40:10.157750 containerd[1583]: time="2026-01-23T01:40:10.157407635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:40:10.163800 containerd[1583]: time="2026-01-23T01:40:10.162081629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 01:40:10.166271 containerd[1583]: time="2026-01-23T01:40:10.166066053Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:40:10.175336 containerd[1583]: time="2026-01-23T01:40:10.174934725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:40:10.176283 containerd[1583]: time="2026-01-23T01:40:10.176236389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 805.937704ms" Jan 23 01:40:10.176347 containerd[1583]: time="2026-01-23T01:40:10.176280401Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:40:10.178485 containerd[1583]: time="2026-01-23T01:40:10.178337147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 01:40:10.783052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327891792.mount: Deactivated successfully. 
Jan 23 01:40:16.592241 containerd[1583]: time="2026-01-23T01:40:16.591984002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:16.595162 containerd[1583]: time="2026-01-23T01:40:16.595123249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 23 01:40:16.597088 containerd[1583]: time="2026-01-23T01:40:16.597034812Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:16.602821 containerd[1583]: time="2026-01-23T01:40:16.602742639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:16.604109 containerd[1583]: time="2026-01-23T01:40:16.604003305Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 6.425633087s" Jan 23 01:40:16.604109 containerd[1583]: time="2026-01-23T01:40:16.604065060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 01:40:17.635488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 01:40:17.638818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:40:17.970191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:17.995978 (kubelet)[2348]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:40:18.071889 kubelet[2348]: E0123 01:40:18.071528 2348 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:40:18.076745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:40:18.077049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:40:18.077755 systemd[1]: kubelet.service: Consumed 306ms CPU time, 110.9M memory peak. Jan 23 01:40:20.469385 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:20.469819 systemd[1]: kubelet.service: Consumed 306ms CPU time, 110.9M memory peak. Jan 23 01:40:20.473698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:40:20.519883 systemd[1]: Reload requested from client PID 2364 ('systemctl') (unit session-9.scope)... Jan 23 01:40:20.519962 systemd[1]: Reloading... Jan 23 01:40:20.663110 zram_generator::config[2410]: No configuration found. Jan 23 01:40:21.133078 systemd[1]: Reloading finished in 612 ms. Jan 23 01:40:21.219887 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:40:21.220063 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 23 01:40:21.220532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:21.220798 systemd[1]: kubelet.service: Consumed 195ms CPU time, 98.2M memory peak. Jan 23 01:40:21.223196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:40:21.511299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:21.531400 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:40:21.623367 kubelet[2455]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:40:21.623367 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:40:21.623367 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:40:21.623975 kubelet[2455]: I0123 01:40:21.623515 2455 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:40:21.841143 kubelet[2455]: I0123 01:40:21.840912 2455 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:40:21.841143 kubelet[2455]: I0123 01:40:21.841063 2455 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:40:21.841886 kubelet[2455]: I0123 01:40:21.841792 2455 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:40:21.883312 kubelet[2455]: I0123 01:40:21.883191 2455 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:40:21.885749 kubelet[2455]: E0123 01:40:21.884820 2455 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:40:21.906490 kubelet[2455]: I0123 01:40:21.906404 2455 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:40:21.924316 kubelet[2455]: I0123 01:40:21.924193 2455 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:40:21.926154 kubelet[2455]: I0123 01:40:21.926019 2455 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:40:21.926894 kubelet[2455]: I0123 01:40:21.926106 2455 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:40:21.927153 kubelet[2455]: I0123 01:40:21.927099 2455 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:40:21.927198 kubelet[2455]: I0123 01:40:21.927164 2455 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:40:21.929741 kubelet[2455]: I0123 01:40:21.929443 2455 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:40:21.933707 kubelet[2455]: I0123 01:40:21.933424 2455 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:40:21.933780 kubelet[2455]: I0123 01:40:21.933723 2455 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:40:21.934179 kubelet[2455]: I0123 01:40:21.934054 2455 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:40:21.934179 kubelet[2455]: I0123 01:40:21.934157 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:40:21.939382 kubelet[2455]: E0123 01:40:21.939289 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:40:21.940889 kubelet[2455]: E0123 01:40:21.940733 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 
01:40:21.946814 kubelet[2455]: I0123 01:40:21.946662 2455 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:40:21.948266 kubelet[2455]: I0123 01:40:21.948123 2455 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:40:21.949593 kubelet[2455]: W0123 01:40:21.949457 2455 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:40:21.957986 kubelet[2455]: I0123 01:40:21.957899 2455 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:40:21.958318 kubelet[2455]: I0123 01:40:21.958239 2455 server.go:1289] "Started kubelet" Jan 23 01:40:21.961317 kubelet[2455]: I0123 01:40:21.961058 2455 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:40:21.963932 kubelet[2455]: I0123 01:40:21.963873 2455 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:40:21.964066 kubelet[2455]: I0123 01:40:21.964012 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:40:21.964207 kubelet[2455]: I0123 01:40:21.964102 2455 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:40:21.967063 kubelet[2455]: E0123 01:40:21.963209 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.137:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.137:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d3890836f0231 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:40:21.958009393 +0000 UTC m=+0.417091835,LastTimestamp:2026-01-23 01:40:21.958009393 +0000 UTC m=+0.417091835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:40:21.969450 kubelet[2455]: I0123 01:40:21.969331 2455 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:40:21.970337 kubelet[2455]: I0123 01:40:21.970206 2455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:40:21.971445 kubelet[2455]: E0123 01:40:21.971218 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:40:21.972254 kubelet[2455]: I0123 01:40:21.971832 2455 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:40:21.972772 kubelet[2455]: I0123 01:40:21.972413 2455 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:40:21.974017 kubelet[2455]: E0123 01:40:21.973897 2455 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:40:21.975313 kubelet[2455]: I0123 01:40:21.975258 2455 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:40:21.976381 kubelet[2455]: E0123 01:40:21.976350 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:40:21.976497 kubelet[2455]: E0123 01:40:21.976381 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="200ms" Jan 23 01:40:21.979892 kubelet[2455]: I0123 01:40:21.979802 2455 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:40:21.980036 kubelet[2455]: I0123 01:40:21.980007 2455 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:40:21.984257 kubelet[2455]: I0123 01:40:21.984181 2455 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:40:22.021904 kubelet[2455]: I0123 01:40:22.021318 2455 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:40:22.021904 kubelet[2455]: I0123 01:40:22.021830 2455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:40:22.021904 kubelet[2455]: I0123 01:40:22.021860 2455 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:40:22.030081 kubelet[2455]: I0123 01:40:22.029950 2455 policy_none.go:49] "None policy: Start" Jan 23 01:40:22.030180 kubelet[2455]: I0123 01:40:22.030153 2455 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:40:22.030371 kubelet[2455]: I0123 01:40:22.030350 2455 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:40:22.030877 kubelet[2455]: I0123 01:40:22.030707 2455 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:40:22.034804 kubelet[2455]: I0123 01:40:22.034523 2455 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:40:22.035277 kubelet[2455]: I0123 01:40:22.035194 2455 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:40:22.035409 kubelet[2455]: I0123 01:40:22.035331 2455 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 01:40:22.035409 kubelet[2455]: I0123 01:40:22.035390 2455 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:40:22.035494 kubelet[2455]: E0123 01:40:22.035446 2455 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:40:22.037975 kubelet[2455]: E0123 01:40:22.037834 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:40:22.057296 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:40:22.071935 kubelet[2455]: E0123 01:40:22.071865 2455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:40:22.075115 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:40:22.083322 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:40:22.105150 kubelet[2455]: E0123 01:40:22.104819 2455 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:40:22.105292 kubelet[2455]: I0123 01:40:22.105228 2455 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:40:22.105746 kubelet[2455]: I0123 01:40:22.105384 2455 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:40:22.106109 kubelet[2455]: I0123 01:40:22.105996 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:40:22.108035 kubelet[2455]: E0123 01:40:22.107864 2455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:40:22.108666 kubelet[2455]: E0123 01:40:22.108418 2455 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:40:22.158510 systemd[1]: Created slice kubepods-burstable-pod8aad776a2f26955ff68c865c0c0362aa.slice - libcontainer container kubepods-burstable-pod8aad776a2f26955ff68c865c0c0362aa.slice. 
Jan 23 01:40:22.177241 kubelet[2455]: I0123 01:40:22.177164 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:22.178119 kubelet[2455]: I0123 01:40:22.178052 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:22.178221 kubelet[2455]: I0123 01:40:22.178128 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:22.178221 kubelet[2455]: I0123 01:40:22.178155 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:22.178221 kubelet[2455]: I0123 01:40:22.178175 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8aad776a2f26955ff68c865c0c0362aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aad776a2f26955ff68c865c0c0362aa\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:22.178221 kubelet[2455]: I0123 01:40:22.178197 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8aad776a2f26955ff68c865c0c0362aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8aad776a2f26955ff68c865c0c0362aa\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:22.178340 kubelet[2455]: I0123 01:40:22.178216 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:22.178340 kubelet[2455]: I0123 01:40:22.178312 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:22.178340 kubelet[2455]: I0123 01:40:22.178332 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8aad776a2f26955ff68c865c0c0362aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aad776a2f26955ff68c865c0c0362aa\") " 
pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:22.178429 kubelet[2455]: E0123 01:40:22.177826 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="400ms" Jan 23 01:40:22.178429 kubelet[2455]: E0123 01:40:22.178004 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:22.184288 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 01:40:22.188723 kubelet[2455]: E0123 01:40:22.188450 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:22.193328 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 23 01:40:22.197397 kubelet[2455]: E0123 01:40:22.197309 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:22.209368 kubelet[2455]: I0123 01:40:22.209107 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:40:22.210182 kubelet[2455]: E0123 01:40:22.210075 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jan 23 01:40:22.414221 kubelet[2455]: I0123 01:40:22.413532 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:40:22.414385 kubelet[2455]: E0123 01:40:22.414311 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jan 23 01:40:22.481474 containerd[1583]: time="2026-01-23T01:40:22.481363288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8aad776a2f26955ff68c865c0c0362aa,Namespace:kube-system,Attempt:0,}" Jan 23 01:40:22.490893 containerd[1583]: time="2026-01-23T01:40:22.490498709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 01:40:22.499359 containerd[1583]: time="2026-01-23T01:40:22.499279132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 01:40:22.525882 containerd[1583]: time="2026-01-23T01:40:22.525505646Z" level=info msg="connecting to shim 65b1865197b8a9cdd078e71a0cd86571cf104a3dbe142f2a8ceedc141423dc70" address="unix:///run/containerd/s/19f6053ab661dfa7ceda729e43a8f1c4ecf45fb45b89e93d8703c52a73640fa2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:40:22.564909 containerd[1583]: time="2026-01-23T01:40:22.564766448Z" level=info msg="connecting to shim f2db3af36a776eda79a470a96922df656ffee6ce680995d3f4043fdbd772f34b" address="unix:///run/containerd/s/5132fb2b281ad66878fdc12fd85aa5b2bf716b281c91c0c228e414fca796caae" namespace=k8s.io protocol=ttrpc 
version=3 Jan 23 01:40:22.570994 containerd[1583]: time="2026-01-23T01:40:22.570862883Z" level=info msg="connecting to shim 2131276a840458f6eb05334ec8ac84f20599cf7a71214d2cb715ac186c457406" address="unix:///run/containerd/s/1898cdc47cd37e8e058257430d869f8b4b128386f4a4c35cdd1506f5ee090104" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:40:22.579457 kubelet[2455]: E0123 01:40:22.579357 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.137:6443: connect: connection refused" interval="800ms" Jan 23 01:40:22.583061 systemd[1]: Started cri-containerd-65b1865197b8a9cdd078e71a0cd86571cf104a3dbe142f2a8ceedc141423dc70.scope - libcontainer container 65b1865197b8a9cdd078e71a0cd86571cf104a3dbe142f2a8ceedc141423dc70. Jan 23 01:40:22.642024 systemd[1]: Started cri-containerd-2131276a840458f6eb05334ec8ac84f20599cf7a71214d2cb715ac186c457406.scope - libcontainer container 2131276a840458f6eb05334ec8ac84f20599cf7a71214d2cb715ac186c457406. Jan 23 01:40:22.647813 systemd[1]: Started cri-containerd-f2db3af36a776eda79a470a96922df656ffee6ce680995d3f4043fdbd772f34b.scope - libcontainer container f2db3af36a776eda79a470a96922df656ffee6ce680995d3f4043fdbd772f34b. Jan 23 01:40:22.715777 containerd[1583]: time="2026-01-23T01:40:22.714919200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8aad776a2f26955ff68c865c0c0362aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"65b1865197b8a9cdd078e71a0cd86571cf104a3dbe142f2a8ceedc141423dc70\"" Jan 23 01:40:22.732154 containerd[1583]: time="2026-01-23T01:40:22.732054187Z" level=info msg="CreateContainer within sandbox \"65b1865197b8a9cdd078e71a0cd86571cf104a3dbe142f2a8ceedc141423dc70\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:40:22.752750 containerd[1583]: time="2026-01-23T01:40:22.751709487Z" level=info msg="Container bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:40:22.756927 containerd[1583]: time="2026-01-23T01:40:22.756831426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2131276a840458f6eb05334ec8ac84f20599cf7a71214d2cb715ac186c457406\"" Jan 23 01:40:22.768749 containerd[1583]: time="2026-01-23T01:40:22.768439792Z" level=info msg="CreateContainer within sandbox \"2131276a840458f6eb05334ec8ac84f20599cf7a71214d2cb715ac186c457406\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:40:22.769458 containerd[1583]: time="2026-01-23T01:40:22.769275298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2db3af36a776eda79a470a96922df656ffee6ce680995d3f4043fdbd772f34b\"" Jan 23 01:40:22.781270 containerd[1583]: time="2026-01-23T01:40:22.780107587Z" level=info msg="CreateContainer within sandbox \"f2db3af36a776eda79a470a96922df656ffee6ce680995d3f4043fdbd772f34b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:40:22.783715 containerd[1583]: time="2026-01-23T01:40:22.783398899Z" level=info msg="CreateContainer within sandbox \"65b1865197b8a9cdd078e71a0cd86571cf104a3dbe142f2a8ceedc141423dc70\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} 
returns container id \"bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60\"" Jan 23 01:40:22.785846 containerd[1583]: time="2026-01-23T01:40:22.785377011Z" level=info msg="StartContainer for \"bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60\"" Jan 23 01:40:22.788767 containerd[1583]: time="2026-01-23T01:40:22.788156348Z" level=info msg="connecting to shim bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60" address="unix:///run/containerd/s/19f6053ab661dfa7ceda729e43a8f1c4ecf45fb45b89e93d8703c52a73640fa2" protocol=ttrpc version=3 Jan 23 01:40:22.795777 containerd[1583]: time="2026-01-23T01:40:22.795371886Z" level=info msg="Container 71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:40:22.801853 containerd[1583]: time="2026-01-23T01:40:22.801715296Z" level=info msg="Container 2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:40:22.809189 containerd[1583]: time="2026-01-23T01:40:22.808844072Z" level=info msg="CreateContainer within sandbox \"2131276a840458f6eb05334ec8ac84f20599cf7a71214d2cb715ac186c457406\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e\"" Jan 23 01:40:22.813091 containerd[1583]: time="2026-01-23T01:40:22.812876589Z" level=info msg="StartContainer for \"71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e\"" Jan 23 01:40:22.814387 containerd[1583]: time="2026-01-23T01:40:22.814129548Z" level=info msg="connecting to shim 71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e" address="unix:///run/containerd/s/1898cdc47cd37e8e058257430d869f8b4b128386f4a4c35cdd1506f5ee090104" protocol=ttrpc version=3 Jan 23 01:40:22.817111 kubelet[2455]: I0123 01:40:22.816960 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:40:22.819233 kubelet[2455]: E0123 01:40:22.818988 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.137:6443/api/v1/nodes\": dial tcp 10.0.0.137:6443: connect: connection refused" node="localhost" Jan 23 01:40:22.824091 kubelet[2455]: E0123 01:40:22.823788 2455 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.137:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:40:22.826125 containerd[1583]: time="2026-01-23T01:40:22.825837309Z" level=info msg="CreateContainer within sandbox \"f2db3af36a776eda79a470a96922df656ffee6ce680995d3f4043fdbd772f34b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef\"" Jan 23 01:40:22.827031 systemd[1]: Started cri-containerd-bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60.scope - libcontainer container bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60. 
Jan 23 01:40:22.829333 containerd[1583]: time="2026-01-23T01:40:22.828094573Z" level=info msg="StartContainer for \"2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef\"" Jan 23 01:40:22.831171 containerd[1583]: time="2026-01-23T01:40:22.831140129Z" level=info msg="connecting to shim 2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef" address="unix:///run/containerd/s/5132fb2b281ad66878fdc12fd85aa5b2bf716b281c91c0c228e414fca796caae" protocol=ttrpc version=3 Jan 23 01:40:22.867008 systemd[1]: Started cri-containerd-71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e.scope - libcontainer container 71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e. Jan 23 01:40:22.881305 systemd[1]: Started cri-containerd-2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef.scope - libcontainer container 2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef. Jan 23 01:40:22.996792 containerd[1583]: time="2026-01-23T01:40:22.995243728Z" level=info msg="StartContainer for \"71ef8eb228249ff56236760ab273966fdcb056f5dbd5e8418f8a65ada0834e3e\" returns successfully" Jan 23 01:40:23.020887 containerd[1583]: time="2026-01-23T01:40:23.020846631Z" level=info msg="StartContainer for \"bdc95fbc072e7793c49d8d507ccb6fc7682208eb47ec7f5fd1b290aa2cdfdd60\" returns successfully" Jan 23 01:40:23.031693 containerd[1583]: time="2026-01-23T01:40:23.031169327Z" level=info msg="StartContainer for \"2fc54573595bd25927a5a272a0381b77c4954c97636811c054845476a3425bef\" returns successfully" Jan 23 01:40:23.061504 kubelet[2455]: E0123 01:40:23.061412 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:23.068331 kubelet[2455]: E0123 01:40:23.067076 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:23.073111 kubelet[2455]: E0123 01:40:23.073025 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:23.624694 kubelet[2455]: I0123 01:40:23.622872 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:40:24.079687 kubelet[2455]: E0123 01:40:24.078442 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:24.085398 kubelet[2455]: E0123 01:40:24.085250 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:25.336167 kubelet[2455]: E0123 01:40:25.336068 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:25.823111 kubelet[2455]: E0123 01:40:25.822890 2455 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:40:26.062507 kubelet[2455]: E0123 01:40:26.062126 2455 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 01:40:26.230089 kubelet[2455]: I0123 01:40:26.229977 2455 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:40:26.276202 
kubelet[2455]: I0123 01:40:26.275909 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:26.292427 kubelet[2455]: E0123 01:40:26.292271 2455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:26.292427 kubelet[2455]: I0123 01:40:26.292373 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:26.296854 kubelet[2455]: E0123 01:40:26.296785 2455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:26.296854 kubelet[2455]: I0123 01:40:26.296817 2455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:26.299839 kubelet[2455]: E0123 01:40:26.299802 2455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:26.942157 kubelet[2455]: I0123 01:40:26.942018 2455 apiserver.go:52] "Watching apiserver" Jan 23 01:40:26.973935 kubelet[2455]: I0123 01:40:26.973871 2455 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:40:29.004734 systemd[1]: Reload requested from client PID 2742 ('systemctl') (unit session-9.scope)... Jan 23 01:40:29.004803 systemd[1]: Reloading... Jan 23 01:40:29.157757 zram_generator::config[2785]: No configuration found. Jan 23 01:40:29.552806 systemd[1]: Reloading finished in 547 ms. Jan 23 01:40:29.618359 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:40:29.636192 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:40:29.636932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:29.637011 systemd[1]: kubelet.service: Consumed 1.471s CPU time, 129.9M memory peak. Jan 23 01:40:29.642153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:40:30.005745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:40:30.026386 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:40:30.117194 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:40:30.117194 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:40:30.117194 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 01:40:30.117991 kubelet[2830]: I0123 01:40:30.117189 2830 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:40:30.143209 kubelet[2830]: I0123 01:40:30.141980 2830 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:40:30.143209 kubelet[2830]: I0123 01:40:30.142016 2830 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:40:30.143209 kubelet[2830]: I0123 01:40:30.142250 2830 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:40:30.145212 kubelet[2830]: I0123 01:40:30.144515 2830 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:40:30.151411 kubelet[2830]: I0123 01:40:30.150940 2830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:40:30.170334 kubelet[2830]: I0123 01:40:30.170300 2830 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:40:30.188788 kubelet[2830]: I0123 01:40:30.188357 2830 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:40:30.189132 kubelet[2830]: I0123 01:40:30.189016 2830 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:40:30.189347 kubelet[2830]: I0123 01:40:30.189118 2830 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:40:30.189347 kubelet[2830]: I0123 01:40:30.189336 2830 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:40:30.189347 kubelet[2830]: I0123 01:40:30.189346 2830 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:40:30.189882 kubelet[2830]: I0123 01:40:30.189421 2830 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:40:30.189882 kubelet[2830]: I0123 
01:40:30.189860 2830 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:40:30.189882 kubelet[2830]: I0123 01:40:30.189874 2830 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:40:30.189983 kubelet[2830]: I0123 01:40:30.189899 2830 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:40:30.189983 kubelet[2830]: I0123 01:40:30.189916 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:40:30.200712 kubelet[2830]: I0123 01:40:30.200295 2830 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:40:30.203982 kubelet[2830]: I0123 01:40:30.202451 2830 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:40:30.223816 kubelet[2830]: I0123 01:40:30.223740 2830 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:40:30.224470 kubelet[2830]: I0123 01:40:30.224367 2830 server.go:1289] "Started kubelet" Jan 23 01:40:30.225963 kubelet[2830]: I0123 01:40:30.225922 2830 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:40:30.234792 kubelet[2830]: I0123 01:40:30.226120 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:40:30.247478 kubelet[2830]: I0123 01:40:30.247370 2830 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:40:30.256047 kubelet[2830]: I0123 01:40:30.226240 2830 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:40:30.256047 kubelet[2830]: I0123 01:40:30.254955 2830 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:40:30.257705 kubelet[2830]: I0123 01:40:30.226501 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:40:30.257705 kubelet[2830]: I0123 01:40:30.257270 2830 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:40:30.257705 kubelet[2830]: I0123 01:40:30.257522 2830 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:40:30.263710 kubelet[2830]: I0123 01:40:30.263464 2830 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:40:30.275222 kubelet[2830]: E0123 01:40:30.275101 2830 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:40:30.277470 kubelet[2830]: I0123 01:40:30.277325 2830 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:40:30.277956 kubelet[2830]: I0123 01:40:30.277838 2830 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:40:30.278035 kubelet[2830]: I0123 01:40:30.278004 2830 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:40:30.333015 kubelet[2830]: I0123 01:40:30.332829 2830 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:40:30.363950 kubelet[2830]: I0123 01:40:30.363856 2830 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 23 01:40:30.363950 kubelet[2830]: I0123 01:40:30.363951 2830 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:40:30.364151 kubelet[2830]: I0123 01:40:30.363977 2830 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:40:30.364151 kubelet[2830]: I0123 01:40:30.363988 2830 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:40:30.364151 kubelet[2830]: E0123 01:40:30.364041 2830 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:40:30.387096 kubelet[2830]: I0123 01:40:30.387012 2830 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:40:30.387096 kubelet[2830]: I0123 01:40:30.387035 2830 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:40:30.387096 kubelet[2830]: I0123 01:40:30.387061 2830 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:40:30.387324 kubelet[2830]: I0123 01:40:30.387229 2830 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:40:30.387380 kubelet[2830]: I0123 01:40:30.387312 2830 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:40:30.387380 kubelet[2830]: I0123 01:40:30.387342 2830 policy_none.go:49] "None policy: Start" Jan 23 01:40:30.387380 kubelet[2830]: I0123 01:40:30.387356 2830 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:40:30.387380 kubelet[2830]: I0123 01:40:30.387371 2830 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:40:30.387746 kubelet[2830]: I0123 01:40:30.387496 2830 state_mem.go:75] "Updated machine memory state" Jan 23 01:40:30.420178 kubelet[2830]: E0123 01:40:30.420146 2830 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:40:30.424063 kubelet[2830]: I0123 01:40:30.423533 2830 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:40:30.425198 kubelet[2830]: I0123 01:40:30.424336 2830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:40:30.426724 kubelet[2830]: I0123 01:40:30.426297 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:40:30.433457 kubelet[2830]: E0123 01:40:30.433358 2830 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 01:40:30.467016 kubelet[2830]: I0123 01:40:30.466977 2830 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:30.479467 kubelet[2830]: I0123 01:40:30.478974 2830 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:30.481727 kubelet[2830]: I0123 01:40:30.481491 2830 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:30.538738 kubelet[2830]: I0123 01:40:30.536083 2830 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:40:30.559270 kubelet[2830]: I0123 01:40:30.557385 2830 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 01:40:30.559270 kubelet[2830]: I0123 01:40:30.558234 2830 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:40:30.579832 kubelet[2830]: I0123 01:40:30.579030 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:30.579832 kubelet[2830]: I0123 01:40:30.579079 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:30.579832 kubelet[2830]: I0123 01:40:30.579112 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:30.579832 kubelet[2830]: I0123 01:40:30.579143 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:30.579832 kubelet[2830]: I0123 01:40:30.579163 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8aad776a2f26955ff68c865c0c0362aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aad776a2f26955ff68c865c0c0362aa\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:30.580188 kubelet[2830]: I0123 01:40:30.579185 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8aad776a2f26955ff68c865c0c0362aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8aad776a2f26955ff68c865c0c0362aa\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:30.580188 kubelet[2830]: I0123 01:40:30.579207 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8aad776a2f26955ff68c865c0c0362aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8aad776a2f26955ff68c865c0c0362aa\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:40:30.580188 kubelet[2830]: I0123 01:40:30.579228 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:30.580188 kubelet[2830]: I0123 01:40:30.579254 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:31.191428 kubelet[2830]: I0123 01:40:31.191026 2830 apiserver.go:52] "Watching apiserver" Jan 23 01:40:31.259504 kubelet[2830]: I0123 01:40:31.259135 2830 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:40:31.398203 kubelet[2830]: I0123 01:40:31.397896 2830 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:31.402192 kubelet[2830]: I0123 01:40:31.401903 2830 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:31.420714 kubelet[2830]: E0123 01:40:31.418861 2830 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:40:31.420714 kubelet[2830]: E0123 01:40:31.419172 2830 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 23 01:40:31.455161 kubelet[2830]: I0123 01:40:31.454762 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.454417694 podStartE2EDuration="1.454417694s" podCreationTimestamp="2026-01-23 01:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:40:31.437270822 +0000 UTC m=+1.400278004" watchObservedRunningTime="2026-01-23 01:40:31.454417694 +0000 UTC m=+1.417424886" Jan 23 01:40:31.455161 kubelet[2830]: I0123 01:40:31.454993 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.454981293 podStartE2EDuration="1.454981293s" podCreationTimestamp="2026-01-23 01:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:40:31.451398898 +0000 UTC m=+1.414406079" watchObservedRunningTime="2026-01-23 01:40:31.454981293 +0000 UTC m=+1.417988475" Jan 23 01:40:31.490289 kubelet[2830]: I0123 01:40:31.489916 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.489895124 podStartE2EDuration="1.489895124s" podCreationTimestamp="2026-01-23 01:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:40:31.468522192 +0000 UTC m=+1.431529375" watchObservedRunningTime="2026-01-23 01:40:31.489895124 +0000 UTC m=+1.452902306" Jan 23 01:40:38.078404 kubelet[2830]: E0123 01:40:38.075694 2830 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.588s" Jan 23 01:40:40.022442 kubelet[2830]: I0123 01:40:40.018969 2830 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:40:40.086429 containerd[1583]: time="2026-01-23T01:40:40.086013725Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:40:40.177353 kubelet[2830]: I0123 01:40:40.176214 2830 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:40:41.685359 kubelet[2830]: I0123 01:40:41.680827 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec79a8fb-6e43-4f7c-88de-a100f49e1a53-xtables-lock\") pod \"kube-proxy-754fq\" (UID: \"ec79a8fb-6e43-4f7c-88de-a100f49e1a53\") " pod="kube-system/kube-proxy-754fq" Jan 23 01:40:41.685359 kubelet[2830]: I0123 01:40:41.681089 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec79a8fb-6e43-4f7c-88de-a100f49e1a53-kube-proxy\") pod \"kube-proxy-754fq\" (UID: \"ec79a8fb-6e43-4f7c-88de-a100f49e1a53\") " pod="kube-system/kube-proxy-754fq" Jan 23 01:40:41.685359 kubelet[2830]: I0123 01:40:41.681187 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec79a8fb-6e43-4f7c-88de-a100f49e1a53-lib-modules\") pod \"kube-proxy-754fq\" (UID: \"ec79a8fb-6e43-4f7c-88de-a100f49e1a53\") " pod="kube-system/kube-proxy-754fq" Jan 23 01:40:41.685359 kubelet[2830]: I0123 01:40:41.681205 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhbp\" (UniqueName: \"kubernetes.io/projected/ec79a8fb-6e43-4f7c-88de-a100f49e1a53-kube-api-access-2dhbp\") pod \"kube-proxy-754fq\" (UID: \"ec79a8fb-6e43-4f7c-88de-a100f49e1a53\") " pod="kube-system/kube-proxy-754fq" Jan 23 01:40:41.720805 systemd[1]: Created slice kubepods-besteffort-podec79a8fb_6e43_4f7c_88de_a100f49e1a53.slice - libcontainer container kubepods-besteffort-podec79a8fb_6e43_4f7c_88de_a100f49e1a53.slice. 
Jan 23 01:40:42.021516 kubelet[2830]: I0123 01:40:41.988743 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/718a3053-d1e5-4a84-8a97-43f4226d1013-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cdm9t\" (UID: \"718a3053-d1e5-4a84-8a97-43f4226d1013\") " pod="tigera-operator/tigera-operator-7dcd859c48-cdm9t" Jan 23 01:40:42.145057 kubelet[2830]: I0123 01:40:42.142078 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95xfq\" (UniqueName: \"kubernetes.io/projected/718a3053-d1e5-4a84-8a97-43f4226d1013-kube-api-access-95xfq\") pod \"tigera-operator-7dcd859c48-cdm9t\" (UID: \"718a3053-d1e5-4a84-8a97-43f4226d1013\") " pod="tigera-operator/tigera-operator-7dcd859c48-cdm9t" Jan 23 01:40:42.180007 systemd[1]: Created slice kubepods-besteffort-pod718a3053_d1e5_4a84_8a97_43f4226d1013.slice - libcontainer container kubepods-besteffort-pod718a3053_d1e5_4a84_8a97_43f4226d1013.slice. Jan 23 01:40:42.358866 containerd[1583]: time="2026-01-23T01:40:42.353876084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-754fq,Uid:ec79a8fb-6e43-4f7c-88de-a100f49e1a53,Namespace:kube-system,Attempt:0,}" Jan 23 01:40:42.628824 containerd[1583]: time="2026-01-23T01:40:42.626447947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cdm9t,Uid:718a3053-d1e5-4a84-8a97-43f4226d1013,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:40:42.937957 containerd[1583]: time="2026-01-23T01:40:42.926000963Z" level=info msg="connecting to shim 285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3" address="unix:///run/containerd/s/4cb9c17d2b7530abbcb6f81d49cb726b2b0f9d12667070d833c95e1734bfcddc" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:40:43.146264 containerd[1583]: time="2026-01-23T01:40:43.144904303Z" level=info msg="connecting to shim 370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5" address="unix:///run/containerd/s/4792ce43b064db00b40d5adf3d0f0b387e5f319f3b0aa3d6f9a127992608a4a6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:40:43.869163 systemd[1]: Started cri-containerd-285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3.scope - libcontainer container 285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3. Jan 23 01:40:44.140485 systemd[1]: Started cri-containerd-370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5.scope - libcontainer container 370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5. 
Jan 23 01:40:46.582528 containerd[1583]: time="2026-01-23T01:40:46.383439024Z" level=error msg="get state for 285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3" error="context deadline exceeded" Jan 23 01:40:46.582528 containerd[1583]: time="2026-01-23T01:40:46.562440604Z" level=warning msg="unknown status" status=0 Jan 23 01:40:48.081012 kubelet[2830]: E0123 01:40:47.990191 2830 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.621s" Jan 23 01:40:49.545940 containerd[1583]: time="2026-01-23T01:40:49.527015052Z" level=error msg="get state for 285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3" error="context deadline exceeded" Jan 23 01:40:50.183415 containerd[1583]: time="2026-01-23T01:40:49.553222351Z" level=warning msg="unknown status" status=0 Jan 23 01:40:50.445436 kubelet[2830]: E0123 01:40:50.425191 2830 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.04s" Jan 23 01:40:50.565443 containerd[1583]: time="2026-01-23T01:40:50.564028809Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Jan 23 01:40:50.571952 containerd[1583]: time="2026-01-23T01:40:50.570828533Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Jan 23 01:40:50.969011 containerd[1583]: time="2026-01-23T01:40:50.968464987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-754fq,Uid:ec79a8fb-6e43-4f7c-88de-a100f49e1a53,Namespace:kube-system,Attempt:0,} returns sandbox id \"285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3\"" Jan 23 01:40:51.021891 containerd[1583]: time="2026-01-23T01:40:51.021837368Z" level=info msg="CreateContainer within sandbox \"285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:40:51.038473 containerd[1583]: time="2026-01-23T01:40:51.038422626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cdm9t,Uid:718a3053-d1e5-4a84-8a97-43f4226d1013,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5\"" Jan 23 01:40:51.043858 containerd[1583]: time="2026-01-23T01:40:51.043762319Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:40:51.084245 containerd[1583]: time="2026-01-23T01:40:51.084200498Z" level=info msg="Container 83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:40:51.119884 containerd[1583]: time="2026-01-23T01:40:51.119491722Z" level=info msg="CreateContainer within sandbox \"285f2a192175bf40743cdd7d206339cb1c578da4714186664d00551958c4a0b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f\"" Jan 23 01:40:51.122305 containerd[1583]: time="2026-01-23T01:40:51.122166507Z" level=info msg="StartContainer for \"83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f\"" Jan 23 01:40:51.125970 containerd[1583]: time="2026-01-23T01:40:51.125891071Z" level=info msg="connecting to shim 83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f" address="unix:///run/containerd/s/4cb9c17d2b7530abbcb6f81d49cb726b2b0f9d12667070d833c95e1734bfcddc" protocol=ttrpc version=3 Jan 23 01:40:51.195953 systemd[1]: Started 
cri-containerd-83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f.scope - libcontainer container 83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f. Jan 23 01:40:51.415049 containerd[1583]: time="2026-01-23T01:40:51.415009608Z" level=info msg="StartContainer for \"83b80ff0cb41cc5f58a91c0a051db99856eca8b2655921168a3cce24e0444d0f\" returns successfully" Jan 23 01:40:52.035339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561128027.mount: Deactivated successfully. Jan 23 01:40:54.768333 containerd[1583]: time="2026-01-23T01:40:54.768201457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:54.770322 containerd[1583]: time="2026-01-23T01:40:54.770204696Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 01:40:54.772863 containerd[1583]: time="2026-01-23T01:40:54.772509251Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:54.778388 containerd[1583]: time="2026-01-23T01:40:54.778321965Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:40:54.779213 containerd[1583]: time="2026-01-23T01:40:54.779026511Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.735229397s" Jan 23 01:40:54.779213 containerd[1583]: time="2026-01-23T01:40:54.779127559Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 01:40:54.790502 containerd[1583]: time="2026-01-23T01:40:54.788906547Z" level=info msg="CreateContainer within sandbox \"370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 01:40:54.817084 containerd[1583]: time="2026-01-23T01:40:54.816849253Z" level=info msg="Container ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:40:54.833028 containerd[1583]: time="2026-01-23T01:40:54.832887912Z" level=info msg="CreateContainer within sandbox \"370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5\"" Jan 23 01:40:54.835886 containerd[1583]: time="2026-01-23T01:40:54.834499677Z" level=info msg="StartContainer for \"ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5\"" Jan 23 01:40:54.836990 containerd[1583]: time="2026-01-23T01:40:54.836856270Z" level=info msg="connecting to shim ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5" address="unix:///run/containerd/s/4792ce43b064db00b40d5adf3d0f0b387e5f319f3b0aa3d6f9a127992608a4a6" protocol=ttrpc version=3 Jan 23 01:40:54.916949 systemd[1]: Started cri-containerd-ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5.scope - 
libcontainer container ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5. Jan 23 01:40:56.061013 containerd[1583]: time="2026-01-23T01:40:56.060958806Z" level=info msg="StartContainer for \"ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5\" returns successfully" Jan 23 01:40:56.947453 kubelet[2830]: I0123 01:40:56.942524 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-754fq" podStartSLOduration=15.942232206 podStartE2EDuration="15.942232206s" podCreationTimestamp="2026-01-23 01:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:40:51.488468862 +0000 UTC m=+21.451476045" watchObservedRunningTime="2026-01-23 01:40:56.942232206 +0000 UTC m=+26.905239388" Jan 23 01:41:01.418494 systemd[1]: cri-containerd-ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5.scope: Deactivated successfully. Jan 23 01:41:01.419323 systemd[1]: cri-containerd-ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5.scope: Consumed 1.297s CPU time, 35.7M memory peak. Jan 23 01:41:01.426938 containerd[1583]: time="2026-01-23T01:41:01.426519380Z" level=info msg="received container exit event container_id:\"ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5\" id:\"ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5\" pid:3171 exit_status:1 exited_at:{seconds:1769132461 nanos:424395650}" Jan 23 01:41:01.620181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5-rootfs.mount: Deactivated successfully. Jan 23 01:41:01.953324 kubelet[2830]: I0123 01:41:01.952790 2830 scope.go:117] "RemoveContainer" containerID="ebc7debe70166c424b0882f798e223de32ebf5936de7dd5cd13805cba8996ec5" Jan 23 01:41:01.959957 containerd[1583]: time="2026-01-23T01:41:01.958409470Z" level=info msg="CreateContainer within sandbox \"370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 01:41:02.008290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596580514.mount: Deactivated successfully. Jan 23 01:41:02.010974 containerd[1583]: time="2026-01-23T01:41:02.010922308Z" level=info msg="Container 812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:41:02.040360 containerd[1583]: time="2026-01-23T01:41:02.040305331Z" level=info msg="CreateContainer within sandbox \"370b5dc23a73523e2748a92475c1fdeb96f70c35f58aa5633f69a50b9cb639f5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305\"" Jan 23 01:41:02.046929 containerd[1583]: time="2026-01-23T01:41:02.046081393Z" level=info msg="StartContainer for \"812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305\"" Jan 23 01:41:02.051368 containerd[1583]: time="2026-01-23T01:41:02.051016086Z" level=info msg="connecting to shim 812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305" address="unix:///run/containerd/s/4792ce43b064db00b40d5adf3d0f0b387e5f319f3b0aa3d6f9a127992608a4a6" protocol=ttrpc version=3 Jan 23 01:41:02.177406 systemd[1]: Started cri-containerd-812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305.scope - libcontainer container 812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305. 
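The exit event above reports exited_at as seconds/nanos since the Unix epoch. Converting those fields back to wall-clock time, as a quick consistency check on the values shown, lands on the same instant as the 01:41:01.42 journal timestamp:

```python
from datetime import datetime, timezone

# exited_at fields from the container exit event above
seconds, nanos = 1769132461, 424395650
exited_at = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(exited_at.isoformat())  # 2026-01-23T01:41:01.424396+00:00
```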
Jan 23 01:41:02.379838 containerd[1583]: time="2026-01-23T01:41:02.379074130Z" level=info msg="StartContainer for \"812f25e8b2a155f9b501ebba4d399a19c026d0358de4e1821c7fa82f66345305\" returns successfully" Jan 23 01:41:02.995386 kubelet[2830]: I0123 01:41:02.995235 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cdm9t" podStartSLOduration=18.256529659 podStartE2EDuration="21.995214274s" podCreationTimestamp="2026-01-23 01:40:41 +0000 UTC" firstStartedPulling="2026-01-23 01:40:51.042252117 +0000 UTC m=+21.005259309" lastFinishedPulling="2026-01-23 01:40:54.780936742 +0000 UTC m=+24.743943924" observedRunningTime="2026-01-23 01:40:56.947415369 +0000 UTC m=+26.910422551" watchObservedRunningTime="2026-01-23 01:41:02.995214274 +0000 UTC m=+32.958221456" Jan 23 01:41:06.297193 sudo[1812]: pam_unix(sudo:session): session closed for user root Jan 23 01:41:06.317223 sshd[1811]: Connection closed by 10.0.0.1 port 36786 Jan 23 01:41:06.330301 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Jan 23 01:41:06.355011 systemd[1]: sshd@8-10.0.0.137:22-10.0.0.1:36786.service: Deactivated successfully. Jan 23 01:41:06.356923 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:41:06.374328 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:41:06.377090 systemd[1]: session-9.scope: Consumed 21.584s CPU time, 225.4M memory peak. Jan 23 01:41:06.384420 systemd-logind[1556]: Removed session 9. Jan 23 01:41:16.033495 systemd[1]: Created slice kubepods-besteffort-pod0733b854_c9f9_4e02_9660_7d54fbd14298.slice - libcontainer container kubepods-besteffort-pod0733b854_c9f9_4e02_9660_7d54fbd14298.slice. Jan 23 01:41:16.124248 kubelet[2830]: I0123 01:41:16.124047 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms24t\" (UniqueName: \"kubernetes.io/projected/0733b854-c9f9-4e02-9660-7d54fbd14298-kube-api-access-ms24t\") pod \"calico-typha-56c548f9f9-xpf27\" (UID: \"0733b854-c9f9-4e02-9660-7d54fbd14298\") " pod="calico-system/calico-typha-56c548f9f9-xpf27" Jan 23 01:41:16.124248 kubelet[2830]: I0123 01:41:16.124126 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0733b854-c9f9-4e02-9660-7d54fbd14298-tigera-ca-bundle\") pod \"calico-typha-56c548f9f9-xpf27\" (UID: \"0733b854-c9f9-4e02-9660-7d54fbd14298\") " pod="calico-system/calico-typha-56c548f9f9-xpf27" Jan 23 01:41:16.124248 kubelet[2830]: I0123 01:41:16.124150 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0733b854-c9f9-4e02-9660-7d54fbd14298-typha-certs\") pod \"calico-typha-56c548f9f9-xpf27\" (UID: \"0733b854-c9f9-4e02-9660-7d54fbd14298\") " pod="calico-system/calico-typha-56c548f9f9-xpf27" Jan 23 01:41:16.322144 systemd[1]: Created slice kubepods-besteffort-podb6765b29_0838_4c00_91a2_8b045de10a43.slice - libcontainer container kubepods-besteffort-podb6765b29_0838_4c00_91a2_8b045de10a43.slice. 
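The slice names in the entries above appear to be derived mechanically from the pod UID: the kubepods-besteffort-pod prefix plus the UID with its dashes turned into underscores (dashes act as hierarchy separators in systemd slice names). A tiny sketch, assuming that convention, reproduces the calico-typha slice name logged above:

```python
def besteffort_slice(pod_uid: str) -> str:
    # Pattern visible in the log: the UID's dashes become underscores inside the unit name.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_slice("0733b854-c9f9-4e02-9660-7d54fbd14298"))
# kubepods-besteffort-pod0733b854_c9f9_4e02_9660_7d54fbd14298.slice
```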
Jan 23 01:41:16.331653 kubelet[2830]: I0123 01:41:16.331005 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-lib-modules\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.331653 kubelet[2830]: I0123 01:41:16.331048 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-var-lib-calico\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.331653 kubelet[2830]: I0123 01:41:16.331071 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-xtables-lock\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.331653 kubelet[2830]: I0123 01:41:16.331100 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6765b29-0838-4c00-91a2-8b045de10a43-tigera-ca-bundle\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.331653 kubelet[2830]: I0123 01:41:16.331209 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-cni-log-dir\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.332013 kubelet[2830]: I0123 01:41:16.331234 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-policysync\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.332013 kubelet[2830]: I0123 01:41:16.331255 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-cni-bin-dir\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.332013 kubelet[2830]: I0123 01:41:16.331276 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-var-run-calico\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.332013 kubelet[2830]: I0123 01:41:16.331298 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-flexvol-driver-host\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.332013 kubelet[2830]: I0123 01:41:16.331319 2830 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6765b29-0838-4c00-91a2-8b045de10a43-node-certs\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.336037 kubelet[2830]: I0123 01:41:16.331344 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6765b29-0838-4c00-91a2-8b045de10a43-cni-net-dir\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.336037 kubelet[2830]: I0123 01:41:16.331368 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhrt\" (UniqueName: \"kubernetes.io/projected/b6765b29-0838-4c00-91a2-8b045de10a43-kube-api-access-nrhrt\") pod \"calico-node-d272b\" (UID: \"b6765b29-0838-4c00-91a2-8b045de10a43\") " pod="calico-system/calico-node-d272b" Jan 23 01:41:16.438363 kubelet[2830]: E0123 01:41:16.436944 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:16.459739 kubelet[2830]: E0123 01:41:16.459269 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.459929 kubelet[2830]: W0123 01:41:16.459501 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.460247 kubelet[2830]: E0123 01:41:16.460157 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.494803 kubelet[2830]: E0123 01:41:16.494302 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.494803 kubelet[2830]: W0123 01:41:16.494418 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.494803 kubelet[2830]: E0123 01:41:16.494449 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.532793 kubelet[2830]: E0123 01:41:16.526163 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.532793 kubelet[2830]: W0123 01:41:16.526216 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.532793 kubelet[2830]: E0123 01:41:16.526239 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.537135 kubelet[2830]: E0123 01:41:16.536893 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.537135 kubelet[2830]: W0123 01:41:16.537050 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.537135 kubelet[2830]: E0123 01:41:16.537073 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.538996 kubelet[2830]: E0123 01:41:16.538402 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.538996 kubelet[2830]: W0123 01:41:16.538443 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.538996 kubelet[2830]: E0123 01:41:16.538483 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.543133 kubelet[2830]: E0123 01:41:16.543006 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.543133 kubelet[2830]: W0123 01:41:16.543102 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.543133 kubelet[2830]: E0123 01:41:16.543125 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.545904 kubelet[2830]: E0123 01:41:16.545822 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.545904 kubelet[2830]: W0123 01:41:16.545839 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.545904 kubelet[2830]: E0123 01:41:16.545854 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.546753 kubelet[2830]: E0123 01:41:16.546369 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.546753 kubelet[2830]: W0123 01:41:16.546384 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.546753 kubelet[2830]: E0123 01:41:16.546399 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.547233 kubelet[2830]: E0123 01:41:16.547073 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.547233 kubelet[2830]: W0123 01:41:16.547179 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.547973 kubelet[2830]: E0123 01:41:16.547198 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.548376 kubelet[2830]: E0123 01:41:16.548308 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.548376 kubelet[2830]: W0123 01:41:16.548322 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.548376 kubelet[2830]: E0123 01:41:16.548332 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.549279 kubelet[2830]: E0123 01:41:16.549213 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.549279 kubelet[2830]: W0123 01:41:16.549232 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.549279 kubelet[2830]: E0123 01:41:16.549247 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.550470 kubelet[2830]: E0123 01:41:16.550248 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.550470 kubelet[2830]: W0123 01:41:16.550271 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.553327 kubelet[2830]: E0123 01:41:16.553293 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.553383 kubelet[2830]: I0123 01:41:16.553345 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4655da0-4d87-462c-8176-c9772e42f76a-registration-dir\") pod \"csi-node-driver-7xq6t\" (UID: \"d4655da0-4d87-462c-8176-c9772e42f76a\") " pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:16.556149 kubelet[2830]: E0123 01:41:16.555901 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.556149 kubelet[2830]: W0123 01:41:16.556006 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.556149 kubelet[2830]: E0123 01:41:16.556023 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.558755 kubelet[2830]: E0123 01:41:16.558409 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.560372 kubelet[2830]: W0123 01:41:16.558526 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.560422 kubelet[2830]: I0123 01:41:16.559949 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4655da0-4d87-462c-8176-c9772e42f76a-kubelet-dir\") pod \"csi-node-driver-7xq6t\" (UID: \"d4655da0-4d87-462c-8176-c9772e42f76a\") " pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:16.560458 kubelet[2830]: E0123 01:41:16.560439 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.561389 kubelet[2830]: E0123 01:41:16.561345 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.561389 kubelet[2830]: W0123 01:41:16.561366 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.561389 kubelet[2830]: E0123 01:41:16.561381 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.562838 kubelet[2830]: E0123 01:41:16.562110 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.562838 kubelet[2830]: W0123 01:41:16.562214 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.562838 kubelet[2830]: E0123 01:41:16.562230 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.562938 kubelet[2830]: E0123 01:41:16.562529 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.562938 kubelet[2830]: W0123 01:41:16.562865 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.562938 kubelet[2830]: E0123 01:41:16.562879 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.564290 kubelet[2830]: E0123 01:41:16.564178 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.564290 kubelet[2830]: W0123 01:41:16.564283 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.564377 kubelet[2830]: E0123 01:41:16.564299 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.566203 kubelet[2830]: E0123 01:41:16.566043 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.566203 kubelet[2830]: W0123 01:41:16.566062 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.566203 kubelet[2830]: E0123 01:41:16.566074 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.567293 kubelet[2830]: E0123 01:41:16.567178 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.567293 kubelet[2830]: W0123 01:41:16.567281 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.567482 kubelet[2830]: E0123 01:41:16.567298 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.569017 kubelet[2830]: E0123 01:41:16.568921 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.569017 kubelet[2830]: W0123 01:41:16.568941 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.569017 kubelet[2830]: E0123 01:41:16.568958 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.569878 kubelet[2830]: E0123 01:41:16.569242 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.569878 kubelet[2830]: W0123 01:41:16.569348 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.569878 kubelet[2830]: E0123 01:41:16.569362 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.570994 kubelet[2830]: E0123 01:41:16.570330 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.570994 kubelet[2830]: W0123 01:41:16.570428 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.570994 kubelet[2830]: E0123 01:41:16.570442 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.571471 kubelet[2830]: E0123 01:41:16.571123 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.571471 kubelet[2830]: W0123 01:41:16.571224 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.571471 kubelet[2830]: E0123 01:41:16.571240 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.571876 kubelet[2830]: E0123 01:41:16.571494 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.571876 kubelet[2830]: W0123 01:41:16.571509 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.571876 kubelet[2830]: E0123 01:41:16.571523 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.573128 kubelet[2830]: E0123 01:41:16.572291 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.573128 kubelet[2830]: W0123 01:41:16.572386 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.573128 kubelet[2830]: E0123 01:41:16.572402 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.573128 kubelet[2830]: E0123 01:41:16.573084 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.573128 kubelet[2830]: W0123 01:41:16.573095 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.575132 kubelet[2830]: E0123 01:41:16.573107 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.577964 kubelet[2830]: E0123 01:41:16.577489 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.578848 kubelet[2830]: W0123 01:41:16.578377 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.578848 kubelet[2830]: E0123 01:41:16.578490 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.585075 kubelet[2830]: E0123 01:41:16.585016 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.585075 kubelet[2830]: W0123 01:41:16.585048 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.585075 kubelet[2830]: E0123 01:41:16.585069 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.645889 containerd[1583]: time="2026-01-23T01:41:16.643119590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d272b,Uid:b6765b29-0838-4c00-91a2-8b045de10a43,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:16.651809 containerd[1583]: time="2026-01-23T01:41:16.651281967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56c548f9f9-xpf27,Uid:0733b854-c9f9-4e02-9660-7d54fbd14298,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:16.664961 kubelet[2830]: E0123 01:41:16.664919 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.665146 kubelet[2830]: W0123 01:41:16.665126 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.665238 kubelet[2830]: E0123 01:41:16.665219 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.666915 kubelet[2830]: E0123 01:41:16.666190 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.666995 kubelet[2830]: W0123 01:41:16.666980 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.667057 kubelet[2830]: E0123 01:41:16.667045 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.667116 kubelet[2830]: I0123 01:41:16.667103 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d4655da0-4d87-462c-8176-c9772e42f76a-varrun\") pod \"csi-node-driver-7xq6t\" (UID: \"d4655da0-4d87-462c-8176-c9772e42f76a\") " pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:16.669082 kubelet[2830]: E0123 01:41:16.668211 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.669156 kubelet[2830]: W0123 01:41:16.669140 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.669874 kubelet[2830]: E0123 01:41:16.669857 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.672296 kubelet[2830]: E0123 01:41:16.671184 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.672412 kubelet[2830]: W0123 01:41:16.672391 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.672489 kubelet[2830]: E0123 01:41:16.672470 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.672884 kubelet[2830]: I0123 01:41:16.672863 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4655da0-4d87-462c-8176-c9772e42f76a-socket-dir\") pod \"csi-node-driver-7xq6t\" (UID: \"d4655da0-4d87-462c-8176-c9772e42f76a\") " pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:16.686915 kubelet[2830]: E0123 01:41:16.685073 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.686915 kubelet[2830]: W0123 01:41:16.685118 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.686915 kubelet[2830]: E0123 01:41:16.685169 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.689000 kubelet[2830]: E0123 01:41:16.688881 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.689000 kubelet[2830]: W0123 01:41:16.688979 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.689228 kubelet[2830]: E0123 01:41:16.689016 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.699851 kubelet[2830]: E0123 01:41:16.699460 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.699851 kubelet[2830]: W0123 01:41:16.699759 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.699851 kubelet[2830]: E0123 01:41:16.699790 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.700065 kubelet[2830]: I0123 01:41:16.700003 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8wt\" (UniqueName: \"kubernetes.io/projected/d4655da0-4d87-462c-8176-c9772e42f76a-kube-api-access-gb8wt\") pod \"csi-node-driver-7xq6t\" (UID: \"d4655da0-4d87-462c-8176-c9772e42f76a\") " pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:16.719115 kubelet[2830]: E0123 01:41:16.715023 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.719115 kubelet[2830]: W0123 01:41:16.715115 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.719115 kubelet[2830]: E0123 01:41:16.715144 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.719115 kubelet[2830]: E0123 01:41:16.716202 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.719115 kubelet[2830]: W0123 01:41:16.716220 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.719115 kubelet[2830]: E0123 01:41:16.716243 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.719512 kubelet[2830]: E0123 01:41:16.719204 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.719512 kubelet[2830]: W0123 01:41:16.719218 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.719512 kubelet[2830]: E0123 01:41:16.719237 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.722317 kubelet[2830]: E0123 01:41:16.720206 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.722317 kubelet[2830]: W0123 01:41:16.720221 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.722317 kubelet[2830]: E0123 01:41:16.720239 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.722317 kubelet[2830]: E0123 01:41:16.721133 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.722317 kubelet[2830]: W0123 01:41:16.721149 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.722317 kubelet[2830]: E0123 01:41:16.721163 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.727149 kubelet[2830]: E0123 01:41:16.725900 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.727149 kubelet[2830]: W0123 01:41:16.725921 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.727149 kubelet[2830]: E0123 01:41:16.725934 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.728378 kubelet[2830]: E0123 01:41:16.728275 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.728441 kubelet[2830]: W0123 01:41:16.728378 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.728441 kubelet[2830]: E0123 01:41:16.728396 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.729298 kubelet[2830]: E0123 01:41:16.729192 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.729298 kubelet[2830]: W0123 01:41:16.729295 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.729370 kubelet[2830]: E0123 01:41:16.729310 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.732107 kubelet[2830]: E0123 01:41:16.731520 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.732983 kubelet[2830]: W0123 01:41:16.732863 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.732983 kubelet[2830]: E0123 01:41:16.732972 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.735859 kubelet[2830]: E0123 01:41:16.735523 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.736658 kubelet[2830]: W0123 01:41:16.736245 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.736658 kubelet[2830]: E0123 01:41:16.736347 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.737470 kubelet[2830]: E0123 01:41:16.737174 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.737470 kubelet[2830]: W0123 01:41:16.737278 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.737470 kubelet[2830]: E0123 01:41:16.737293 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.738167 kubelet[2830]: E0123 01:41:16.738015 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.738167 kubelet[2830]: W0123 01:41:16.738119 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.738167 kubelet[2830]: E0123 01:41:16.738134 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.804980 kubelet[2830]: E0123 01:41:16.804870 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.804980 kubelet[2830]: W0123 01:41:16.804979 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.805123 kubelet[2830]: E0123 01:41:16.805009 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.812237 kubelet[2830]: E0123 01:41:16.810078 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.812237 kubelet[2830]: W0123 01:41:16.810178 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.812237 kubelet[2830]: E0123 01:41:16.810196 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.812237 kubelet[2830]: E0123 01:41:16.811526 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.812237 kubelet[2830]: W0123 01:41:16.811866 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.812237 kubelet[2830]: E0123 01:41:16.811882 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.813860 kubelet[2830]: E0123 01:41:16.813310 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.813860 kubelet[2830]: W0123 01:41:16.813325 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.813860 kubelet[2830]: E0123 01:41:16.813339 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.819242 kubelet[2830]: E0123 01:41:16.819015 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.819242 kubelet[2830]: W0123 01:41:16.819033 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.819242 kubelet[2830]: E0123 01:41:16.819048 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.819380 kubelet[2830]: E0123 01:41:16.819297 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.819380 kubelet[2830]: W0123 01:41:16.819308 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.819380 kubelet[2830]: E0123 01:41:16.819320 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.820901 kubelet[2830]: E0123 01:41:16.820882 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.820901 kubelet[2830]: W0123 01:41:16.820899 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.820999 kubelet[2830]: E0123 01:41:16.820913 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.823076 kubelet[2830]: E0123 01:41:16.822869 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.823076 kubelet[2830]: W0123 01:41:16.822888 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.823076 kubelet[2830]: E0123 01:41:16.822901 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.827969 kubelet[2830]: E0123 01:41:16.824511 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.827969 kubelet[2830]: W0123 01:41:16.824916 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.827969 kubelet[2830]: E0123 01:41:16.824935 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.827969 kubelet[2830]: E0123 01:41:16.827456 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.827969 kubelet[2830]: W0123 01:41:16.827469 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.827969 kubelet[2830]: E0123 01:41:16.827482 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.835512 containerd[1583]: time="2026-01-23T01:41:16.835022242Z" level=info msg="connecting to shim 5f631c9d3d75005d7097a09f007103ba9b0a2f3e6605fd7af0edc5a51b9195f3" address="unix:///run/containerd/s/811aba232f40ef90235eb317c462ee8fd2f7a13408a050f346354b822dbb0d2a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:16.839948 kubelet[2830]: E0123 01:41:16.836410 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.839948 kubelet[2830]: W0123 01:41:16.836509 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.839948 kubelet[2830]: E0123 01:41:16.836528 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.839948 kubelet[2830]: E0123 01:41:16.838917 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.839948 kubelet[2830]: W0123 01:41:16.838932 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.839948 kubelet[2830]: E0123 01:41:16.838948 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.840198 kubelet[2830]: E0123 01:41:16.840080 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.840198 kubelet[2830]: W0123 01:41:16.840092 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.840198 kubelet[2830]: E0123 01:41:16.840106 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.841433 kubelet[2830]: E0123 01:41:16.841260 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.841433 kubelet[2830]: W0123 01:41:16.841360 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.841433 kubelet[2830]: E0123 01:41:16.841377 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:41:16.846057 kubelet[2830]: E0123 01:41:16.845350 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.846057 kubelet[2830]: W0123 01:41:16.845449 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.846057 kubelet[2830]: E0123 01:41:16.845467 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:16.879493 containerd[1583]: time="2026-01-23T01:41:16.879135711Z" level=info msg="connecting to shim 4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795" address="unix:///run/containerd/s/fc05300187cee6678d0ddbd20715d2074bb3c2810499a4a653120c7516c450a7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:16.901996 kubelet[2830]: E0123 01:41:16.901965 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:41:16.903994 kubelet[2830]: W0123 01:41:16.903358 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:41:16.903994 kubelet[2830]: E0123 01:41:16.903391 2830 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:41:17.008318 systemd[1]: Started cri-containerd-5f631c9d3d75005d7097a09f007103ba9b0a2f3e6605fd7af0edc5a51b9195f3.scope - libcontainer container 5f631c9d3d75005d7097a09f007103ba9b0a2f3e6605fd7af0edc5a51b9195f3. Jan 23 01:41:17.088363 systemd[1]: Started cri-containerd-4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795.scope - libcontainer container 4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795. Jan 23 01:41:17.377227 containerd[1583]: time="2026-01-23T01:41:17.375112686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d272b,Uid:b6765b29-0838-4c00-91a2-8b045de10a43,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\"" Jan 23 01:41:17.396208 containerd[1583]: time="2026-01-23T01:41:17.396111798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56c548f9f9-xpf27,Uid:0733b854-c9f9-4e02-9660-7d54fbd14298,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f631c9d3d75005d7097a09f007103ba9b0a2f3e6605fd7af0edc5a51b9195f3\"" Jan 23 01:41:17.406851 containerd[1583]: time="2026-01-23T01:41:17.406408486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 01:41:18.216104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749165031.mount: Deactivated successfully. 
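The burst of "unexpected end of JSON input" / "executable file not found in $PATH" entries above comes from the kubelet's FlexVolume prober: it keeps invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, the binary is not present, so driver-call.go gets empty output and the JSON unmarshal fails. For context only (this code is not part of the system being logged), a FlexVolume driver is expected to answer init with a small JSON status document; a minimal illustrative sketch in Go:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet tries to unmarshal
// after invoking the driver binary (see the driver-call.go entries above).
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var out driverStatus
	switch os.Args[1] {
	case "init":
		// "attach": false tells the kubelet this driver has no attach/detach phase.
		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		out = driverStatus{Status: "Not supported"}
	}
	b, _ := json.Marshal(out)
	fmt.Println(string(b))
}

A driver of this shape in the nodeagent~uds directory (or removal of the stale directory) would presumably quiet these probe errors; as logged here they appear otherwise harmless, since sandbox creation for the Calico pods proceeds regardless.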
Jan 23 01:41:18.365145 kubelet[2830]: E0123 01:41:18.365005 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:18.466287 containerd[1583]: time="2026-01-23T01:41:18.466036653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:18.468899 containerd[1583]: time="2026-01-23T01:41:18.468833571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 23 01:41:18.471888 containerd[1583]: time="2026-01-23T01:41:18.471464034Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:18.478974 containerd[1583]: time="2026-01-23T01:41:18.478840790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:18.482163 containerd[1583]: time="2026-01-23T01:41:18.481881806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.075429849s" Jan 23 01:41:18.482163 containerd[1583]: time="2026-01-23T01:41:18.481949994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:41:18.486193 containerd[1583]: time="2026-01-23T01:41:18.486046312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 01:41:18.496323 containerd[1583]: time="2026-01-23T01:41:18.495308959Z" level=info msg="CreateContainer within sandbox \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:41:18.520066 containerd[1583]: time="2026-01-23T01:41:18.519937287Z" level=info msg="Container 9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:41:18.536738 containerd[1583]: time="2026-01-23T01:41:18.536197670Z" level=info msg="CreateContainer within sandbox \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46\"" Jan 23 01:41:18.539862 containerd[1583]: time="2026-01-23T01:41:18.539408069Z" level=info msg="StartContainer for \"9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46\"" Jan 23 01:41:18.543136 containerd[1583]: time="2026-01-23T01:41:18.542391104Z" level=info msg="connecting to shim 9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46" address="unix:///run/containerd/s/fc05300187cee6678d0ddbd20715d2074bb3c2810499a4a653120c7516c450a7" protocol=ttrpc 
version=3 Jan 23 01:41:18.603424 systemd[1]: Started cri-containerd-9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46.scope - libcontainer container 9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46. Jan 23 01:41:18.794521 containerd[1583]: time="2026-01-23T01:41:18.794322421Z" level=info msg="StartContainer for \"9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46\" returns successfully" Jan 23 01:41:18.829009 systemd[1]: cri-containerd-9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46.scope: Deactivated successfully. Jan 23 01:41:18.846977 containerd[1583]: time="2026-01-23T01:41:18.846921847Z" level=info msg="received container exit event container_id:\"9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46\" id:\"9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46\" pid:3501 exited_at:{seconds:1769132478 nanos:846173696}" Jan 23 01:41:18.992328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9602af2f8d5926e4df2d5452eb3cd3a21405ea5469a146a5dc3f8903846d3b46-rootfs.mount: Deactivated successfully. Jan 23 01:41:20.372032 kubelet[2830]: E0123 01:41:20.371826 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:22.365876 kubelet[2830]: E0123 01:41:22.365361 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:22.755812 containerd[1583]: time="2026-01-23T01:41:22.755069586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:22.759264 containerd[1583]: time="2026-01-23T01:41:22.757956711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 23 01:41:22.763257 containerd[1583]: time="2026-01-23T01:41:22.763129894Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:22.768222 containerd[1583]: time="2026-01-23T01:41:22.768188818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:22.769392 containerd[1583]: time="2026-01-23T01:41:22.769206792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.281420853s" Jan 23 01:41:22.770054 containerd[1583]: time="2026-01-23T01:41:22.769861763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 01:41:22.772891 containerd[1583]: 
time="2026-01-23T01:41:22.772353873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:41:22.809887 containerd[1583]: time="2026-01-23T01:41:22.809487962Z" level=info msg="CreateContainer within sandbox \"5f631c9d3d75005d7097a09f007103ba9b0a2f3e6605fd7af0edc5a51b9195f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 01:41:22.834025 containerd[1583]: time="2026-01-23T01:41:22.833993158Z" level=info msg="Container 54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:41:22.856216 containerd[1583]: time="2026-01-23T01:41:22.856086266Z" level=info msg="CreateContainer within sandbox \"5f631c9d3d75005d7097a09f007103ba9b0a2f3e6605fd7af0edc5a51b9195f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f\"" Jan 23 01:41:22.858292 containerd[1583]: time="2026-01-23T01:41:22.858111302Z" level=info msg="StartContainer for \"54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f\"" Jan 23 01:41:22.860661 containerd[1583]: time="2026-01-23T01:41:22.860271402Z" level=info msg="connecting to shim 54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f" address="unix:///run/containerd/s/811aba232f40ef90235eb317c462ee8fd2f7a13408a050f346354b822dbb0d2a" protocol=ttrpc version=3 Jan 23 01:41:22.937255 systemd[1]: Started cri-containerd-54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f.scope - libcontainer container 54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f. Jan 23 01:41:23.162981 containerd[1583]: time="2026-01-23T01:41:23.162469058Z" level=info msg="StartContainer for \"54343da5f7df7a16a18a18e3911b0891affb60013cfc579f86e109585f25f38f\" returns successfully" Jan 23 01:41:23.263449 kubelet[2830]: I0123 01:41:23.263221 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56c548f9f9-xpf27" podStartSLOduration=2.8945718019999997 podStartE2EDuration="8.263201345s" podCreationTimestamp="2026-01-23 01:41:15 +0000 UTC" firstStartedPulling="2026-01-23 01:41:17.403500537 +0000 UTC m=+47.366507708" lastFinishedPulling="2026-01-23 01:41:22.772130078 +0000 UTC m=+52.735137251" observedRunningTime="2026-01-23 01:41:23.261242792 +0000 UTC m=+53.224249984" watchObservedRunningTime="2026-01-23 01:41:23.263201345 +0000 UTC m=+53.226208527" Jan 23 01:41:24.366175 kubelet[2830]: E0123 01:41:24.365879 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:26.365731 kubelet[2830]: E0123 01:41:26.365312 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:28.368380 kubelet[2830]: E0123 01:41:28.367832 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:28.587473 containerd[1583]: time="2026-01-23T01:41:28.586476359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:28.588948 containerd[1583]: time="2026-01-23T01:41:28.588738795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:41:28.591955 containerd[1583]: time="2026-01-23T01:41:28.591782028Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:28.597178 containerd[1583]: time="2026-01-23T01:41:28.597092051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:28.597817 containerd[1583]: time="2026-01-23T01:41:28.597480070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.82502102s" Jan 23 01:41:28.597817 containerd[1583]: time="2026-01-23T01:41:28.597787764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:41:28.612737 containerd[1583]: time="2026-01-23T01:41:28.611100197Z" level=info msg="CreateContainer within sandbox \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:41:28.633912 containerd[1583]: time="2026-01-23T01:41:28.632289244Z" level=info msg="Container fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:41:28.663054 containerd[1583]: time="2026-01-23T01:41:28.663006886Z" level=info msg="CreateContainer within sandbox \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8\"" Jan 23 01:41:28.667311 containerd[1583]: time="2026-01-23T01:41:28.667266051Z" level=info msg="StartContainer for \"fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8\"" Jan 23 01:41:28.674303 containerd[1583]: time="2026-01-23T01:41:28.674099642Z" level=info msg="connecting to shim fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8" address="unix:///run/containerd/s/fc05300187cee6678d0ddbd20715d2074bb3c2810499a4a653120c7516c450a7" protocol=ttrpc version=3 Jan 23 01:41:28.800164 systemd[1]: Started cri-containerd-fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8.scope - libcontainer container fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8. 
Jan 23 01:41:29.011982 containerd[1583]: time="2026-01-23T01:41:29.011944382Z" level=info msg="StartContainer for \"fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8\" returns successfully" Jan 23 01:41:30.367411 kubelet[2830]: E0123 01:41:30.366307 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:30.614246 systemd[1]: cri-containerd-fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8.scope: Deactivated successfully. Jan 23 01:41:30.615097 systemd[1]: cri-containerd-fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8.scope: Consumed 2.005s CPU time, 186.8M memory peak, 3.9M read from disk, 171.3M written to disk. Jan 23 01:41:30.628385 containerd[1583]: time="2026-01-23T01:41:30.627871570Z" level=info msg="received container exit event container_id:\"fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8\" id:\"fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8\" pid:3611 exited_at:{seconds:1769132490 nanos:627246053}" Jan 23 01:41:30.703259 kubelet[2830]: I0123 01:41:30.702477 2830 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:41:30.721059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc908496efdb43b3e94c7fcd2bf2c5419d42d5bbe815f00f49781ed487d210a8-rootfs.mount: Deactivated successfully. Jan 23 01:41:30.839116 systemd[1]: Created slice kubepods-besteffort-podb0a954f0_0bce_4a3f_aa7d_4601546324c7.slice - libcontainer container kubepods-besteffort-podb0a954f0_0bce_4a3f_aa7d_4601546324c7.slice. Jan 23 01:41:30.860200 systemd[1]: Created slice kubepods-besteffort-pod117ed452_382a_4cae_a50f_439078d719fb.slice - libcontainer container kubepods-besteffort-pod117ed452_382a_4cae_a50f_439078d719fb.slice. Jan 23 01:41:30.883874 systemd[1]: Created slice kubepods-besteffort-pod73bba584_49ff_4a6a_a59a_46cd1ea9004d.slice - libcontainer container kubepods-besteffort-pod73bba584_49ff_4a6a_a59a_46cd1ea9004d.slice. Jan 23 01:41:30.902112 systemd[1]: Created slice kubepods-besteffort-pod3557eedc_6578_421a_8c65_fff9d3233af5.slice - libcontainer container kubepods-besteffort-pod3557eedc_6578_421a_8c65_fff9d3233af5.slice. 
Jan 23 01:41:30.904851 kubelet[2830]: I0123 01:41:30.903252 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73bba584-49ff-4a6a-a59a-46cd1ea9004d-calico-apiserver-certs\") pod \"calico-apiserver-694fcf68f5-4q5l8\" (UID: \"73bba584-49ff-4a6a-a59a-46cd1ea9004d\") " pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" Jan 23 01:41:30.904851 kubelet[2830]: I0123 01:41:30.903297 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt7lg\" (UniqueName: \"kubernetes.io/projected/45f8db5c-10e9-4970-b59b-9e6ccdff633a-kube-api-access-gt7lg\") pod \"calico-apiserver-694fcf68f5-p2bxz\" (UID: \"45f8db5c-10e9-4970-b59b-9e6ccdff633a\") " pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" Jan 23 01:41:30.904851 kubelet[2830]: I0123 01:41:30.903332 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3557eedc-6578-421a-8c65-fff9d3233af5-config\") pod \"goldmane-666569f655-msxcs\" (UID: \"3557eedc-6578-421a-8c65-fff9d3233af5\") " pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:30.904851 kubelet[2830]: I0123 01:41:30.903361 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/45f8db5c-10e9-4970-b59b-9e6ccdff633a-calico-apiserver-certs\") pod \"calico-apiserver-694fcf68f5-p2bxz\" (UID: \"45f8db5c-10e9-4970-b59b-9e6ccdff633a\") " pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" Jan 23 01:41:30.904851 kubelet[2830]: I0123 01:41:30.903385 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfkl\" (UniqueName: \"kubernetes.io/projected/117ed452-382a-4cae-a50f-439078d719fb-kube-api-access-wrfkl\") pod \"calico-kube-controllers-6c9c68dbf8-nsnd4\" (UID: \"117ed452-382a-4cae-a50f-439078d719fb\") " pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" Jan 23 01:41:30.905213 kubelet[2830]: I0123 01:41:30.903408 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3557eedc-6578-421a-8c65-fff9d3233af5-goldmane-ca-bundle\") pod \"goldmane-666569f655-msxcs\" (UID: \"3557eedc-6578-421a-8c65-fff9d3233af5\") " pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:30.905213 kubelet[2830]: I0123 01:41:30.903434 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/b0a954f0-0bce-4a3f-aa7d-4601546324c7-kube-api-access-br642\") pod \"whisker-f46868fc8-4wnj7\" (UID: \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\") " pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 01:41:30.905213 kubelet[2830]: I0123 01:41:30.903463 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dtjn\" (UniqueName: \"kubernetes.io/projected/73bba584-49ff-4a6a-a59a-46cd1ea9004d-kube-api-access-8dtjn\") pod \"calico-apiserver-694fcf68f5-4q5l8\" (UID: \"73bba584-49ff-4a6a-a59a-46cd1ea9004d\") " pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" Jan 23 01:41:30.905213 kubelet[2830]: I0123 01:41:30.903485 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-nwg2c\" (UniqueName: \"kubernetes.io/projected/c5579b08-3693-4846-9ba6-5f0864556381-kube-api-access-nwg2c\") pod \"coredns-674b8bbfcf-snsbz\" (UID: \"c5579b08-3693-4846-9ba6-5f0864556381\") " pod="kube-system/coredns-674b8bbfcf-snsbz" Jan 23 01:41:30.906898 kubelet[2830]: I0123 01:41:30.906870 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5579b08-3693-4846-9ba6-5f0864556381-config-volume\") pod \"coredns-674b8bbfcf-snsbz\" (UID: \"c5579b08-3693-4846-9ba6-5f0864556381\") " pod="kube-system/coredns-674b8bbfcf-snsbz" Jan 23 01:41:30.907032 kubelet[2830]: I0123 01:41:30.907011 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgc28\" (UniqueName: \"kubernetes.io/projected/3557eedc-6578-421a-8c65-fff9d3233af5-kube-api-access-rgc28\") pod \"goldmane-666569f655-msxcs\" (UID: \"3557eedc-6578-421a-8c65-fff9d3233af5\") " pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:30.907150 kubelet[2830]: I0123 01:41:30.907130 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r48p\" (UniqueName: \"kubernetes.io/projected/ca431a16-f60f-49a8-ad90-ef63fc269ffe-kube-api-access-7r48p\") pod \"coredns-674b8bbfcf-99xj6\" (UID: \"ca431a16-f60f-49a8-ad90-ef63fc269ffe\") " pod="kube-system/coredns-674b8bbfcf-99xj6" Jan 23 01:41:30.907255 kubelet[2830]: I0123 01:41:30.907234 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-backend-key-pair\") pod \"whisker-f46868fc8-4wnj7\" (UID: \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\") " pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 01:41:30.907361 kubelet[2830]: I0123 01:41:30.907341 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/117ed452-382a-4cae-a50f-439078d719fb-tigera-ca-bundle\") pod \"calico-kube-controllers-6c9c68dbf8-nsnd4\" (UID: \"117ed452-382a-4cae-a50f-439078d719fb\") " pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" Jan 23 01:41:30.907468 kubelet[2830]: I0123 01:41:30.907446 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3557eedc-6578-421a-8c65-fff9d3233af5-goldmane-key-pair\") pod \"goldmane-666569f655-msxcs\" (UID: \"3557eedc-6578-421a-8c65-fff9d3233af5\") " pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:30.907896 kubelet[2830]: I0123 01:41:30.907873 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca431a16-f60f-49a8-ad90-ef63fc269ffe-config-volume\") pod \"coredns-674b8bbfcf-99xj6\" (UID: \"ca431a16-f60f-49a8-ad90-ef63fc269ffe\") " pod="kube-system/coredns-674b8bbfcf-99xj6" Jan 23 01:41:30.908012 kubelet[2830]: I0123 01:41:30.907992 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-ca-bundle\") pod \"whisker-f46868fc8-4wnj7\" (UID: \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\") " pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 
01:41:30.921436 systemd[1]: Created slice kubepods-besteffort-pod45f8db5c_10e9_4970_b59b_9e6ccdff633a.slice - libcontainer container kubepods-besteffort-pod45f8db5c_10e9_4970_b59b_9e6ccdff633a.slice. Jan 23 01:41:30.941219 systemd[1]: Created slice kubepods-burstable-podc5579b08_3693_4846_9ba6_5f0864556381.slice - libcontainer container kubepods-burstable-podc5579b08_3693_4846_9ba6_5f0864556381.slice. Jan 23 01:41:30.957801 systemd[1]: Created slice kubepods-burstable-podca431a16_f60f_49a8_ad90_ef63fc269ffe.slice - libcontainer container kubepods-burstable-podca431a16_f60f_49a8_ad90_ef63fc269ffe.slice. Jan 23 01:41:31.154935 containerd[1583]: time="2026-01-23T01:41:31.153983911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f46868fc8-4wnj7,Uid:b0a954f0-0bce-4a3f-aa7d-4601546324c7,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:31.175657 containerd[1583]: time="2026-01-23T01:41:31.175337212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9c68dbf8-nsnd4,Uid:117ed452-382a-4cae-a50f-439078d719fb,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:31.206260 containerd[1583]: time="2026-01-23T01:41:31.206230560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-4q5l8,Uid:73bba584-49ff-4a6a-a59a-46cd1ea9004d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:41:31.237833 containerd[1583]: time="2026-01-23T01:41:31.237798901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msxcs,Uid:3557eedc-6578-421a-8c65-fff9d3233af5,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:31.255744 containerd[1583]: time="2026-01-23T01:41:31.254494028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-snsbz,Uid:c5579b08-3693-4846-9ba6-5f0864556381,Namespace:kube-system,Attempt:0,}" Jan 23 01:41:31.255744 containerd[1583]: time="2026-01-23T01:41:31.255452416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-p2bxz,Uid:45f8db5c-10e9-4970-b59b-9e6ccdff633a,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:41:31.294063 containerd[1583]: time="2026-01-23T01:41:31.294016594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99xj6,Uid:ca431a16-f60f-49a8-ad90-ef63fc269ffe,Namespace:kube-system,Attempt:0,}" Jan 23 01:41:31.388073 containerd[1583]: time="2026-01-23T01:41:31.388035727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:41:31.905161 containerd[1583]: time="2026-01-23T01:41:31.905111675Z" level=error msg="Failed to destroy network for sandbox \"355864a052b1f712c17b8efa4d03015cc28a136af4403df4b65b5d44ec1a716e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:31.915226 systemd[1]: run-netns-cni\x2d7f39beb7\x2dc4c4\x2d9f8c\x2dc64b\x2d87b2d3ecc0c3.mount: Deactivated successfully. 
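The "Failed to destroy network for sandbox" error just above, and every RunPodSandbox failure that follows, reduce to the same precondition: the Calico CNI plugin needs /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted, and until calico-node is up the stat fails and no pod network can be set up. A minimal sketch of that dependency (illustrative only, not Calico's actual source), in Go:

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path the CNI plugin consults; it is created by the
// calico/node container, so its absence means calico-node is not ready yet.
const nodenameFile = "/var/lib/calico/nodename"

func nodeName() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the condition reported in the log:
		// stat /var/lib/calico/nodename: no such file or directory.
		return "", fmt.Errorf("reading %s: %w (is the calico/node container running?)", nodenameFile, err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

Until that file exists, each sandbox attempt fails the same way, which is what the remainder of this section shows for the whisker, calico-apiserver, calico-kube-controllers, coredns, and goldmane pods.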
Jan 23 01:41:31.925786 containerd[1583]: time="2026-01-23T01:41:31.920965939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f46868fc8-4wnj7,Uid:b0a954f0-0bce-4a3f-aa7d-4601546324c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"355864a052b1f712c17b8efa4d03015cc28a136af4403df4b65b5d44ec1a716e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:31.968779 containerd[1583]: time="2026-01-23T01:41:31.963813174Z" level=error msg="Failed to destroy network for sandbox \"b52182c734dbe95795d07821558109c0eeb2f6d5cbbd2ed3d0339b43008bea9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:31.973970 systemd[1]: run-netns-cni\x2d55e06dce\x2d4c81\x2dcb22\x2dfa31\x2d216ea56fbd60.mount: Deactivated successfully. Jan 23 01:41:31.974832 containerd[1583]: time="2026-01-23T01:41:31.974389884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-p2bxz,Uid:45f8db5c-10e9-4970-b59b-9e6ccdff633a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52182c734dbe95795d07821558109c0eeb2f6d5cbbd2ed3d0339b43008bea9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:31.986945 kubelet[2830]: E0123 01:41:31.986012 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52182c734dbe95795d07821558109c0eeb2f6d5cbbd2ed3d0339b43008bea9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:31.988386 kubelet[2830]: E0123 01:41:31.988348 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52182c734dbe95795d07821558109c0eeb2f6d5cbbd2ed3d0339b43008bea9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" Jan 23 01:41:31.988969 kubelet[2830]: E0123 01:41:31.988939 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b52182c734dbe95795d07821558109c0eeb2f6d5cbbd2ed3d0339b43008bea9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" Jan 23 01:41:31.989174 kubelet[2830]: E0123 01:41:31.986166 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"355864a052b1f712c17b8efa4d03015cc28a136af4403df4b65b5d44ec1a716e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 23 01:41:31.989354 kubelet[2830]: E0123 01:41:31.989217 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"355864a052b1f712c17b8efa4d03015cc28a136af4403df4b65b5d44ec1a716e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 01:41:31.989354 kubelet[2830]: E0123 01:41:31.989336 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"355864a052b1f712c17b8efa4d03015cc28a136af4403df4b65b5d44ec1a716e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 01:41:31.989459 kubelet[2830]: E0123 01:41:31.989414 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f46868fc8-4wnj7_calico-system(b0a954f0-0bce-4a3f-aa7d-4601546324c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f46868fc8-4wnj7_calico-system(b0a954f0-0bce-4a3f-aa7d-4601546324c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"355864a052b1f712c17b8efa4d03015cc28a136af4403df4b65b5d44ec1a716e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f46868fc8-4wnj7" podUID="b0a954f0-0bce-4a3f-aa7d-4601546324c7" Jan 23 01:41:31.990014 kubelet[2830]: E0123 01:41:31.989953 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b52182c734dbe95795d07821558109c0eeb2f6d5cbbd2ed3d0339b43008bea9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:41:32.001812 containerd[1583]: time="2026-01-23T01:41:32.001478294Z" level=error msg="Failed to destroy network for sandbox \"520a05f132b1db64f2fee714009fee67fcfea634b7c2242d2b4d37db9f6c8e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.009287 systemd[1]: run-netns-cni\x2d6991a194\x2d4c09\x2d7c8b\x2d84cb\x2d10e3f458bab0.mount: Deactivated successfully. 
Jan 23 01:41:32.017337 containerd[1583]: time="2026-01-23T01:41:32.017206006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9c68dbf8-nsnd4,Uid:117ed452-382a-4cae-a50f-439078d719fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"520a05f132b1db64f2fee714009fee67fcfea634b7c2242d2b4d37db9f6c8e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.018794 kubelet[2830]: E0123 01:41:32.017933 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520a05f132b1db64f2fee714009fee67fcfea634b7c2242d2b4d37db9f6c8e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.018794 kubelet[2830]: E0123 01:41:32.017997 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520a05f132b1db64f2fee714009fee67fcfea634b7c2242d2b4d37db9f6c8e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" Jan 23 01:41:32.018794 kubelet[2830]: E0123 01:41:32.018026 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520a05f132b1db64f2fee714009fee67fcfea634b7c2242d2b4d37db9f6c8e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" Jan 23 01:41:32.019077 kubelet[2830]: E0123 01:41:32.018177 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"520a05f132b1db64f2fee714009fee67fcfea634b7c2242d2b4d37db9f6c8e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:41:32.046235 containerd[1583]: time="2026-01-23T01:41:32.045927251Z" level=error msg="Failed to destroy network for sandbox \"c5f581c85b1aa54e5b518e69f67c57b649417ee0fe2a5d021469d0d40729be6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.052438 systemd[1]: run-netns-cni\x2d88cd2bfa\x2d047d\x2dfccd\x2d31ce\x2d71dc366b28a8.mount: Deactivated successfully. 
Jan 23 01:41:32.066934 containerd[1583]: time="2026-01-23T01:41:32.066417600Z" level=error msg="Failed to destroy network for sandbox \"372267771383aa6f827e177806d15e5a3439a8b48ce6a399c95915980ba050de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.075451 containerd[1583]: time="2026-01-23T01:41:32.074169760Z" level=error msg="Failed to destroy network for sandbox \"c38e36c3a1fa91b08ddca41a7a7197da70cd9cd4fa7d0cf7b3f90a212bc398f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.078183 containerd[1583]: time="2026-01-23T01:41:32.077350518Z" level=error msg="Failed to destroy network for sandbox \"61d3a5c743781967b37ff57fb1150fabc101e48684a628d3d6adbdd4bfb650b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.078369 containerd[1583]: time="2026-01-23T01:41:32.078285182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-snsbz,Uid:c5579b08-3693-4846-9ba6-5f0864556381,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f581c85b1aa54e5b518e69f67c57b649417ee0fe2a5d021469d0d40729be6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.080068 kubelet[2830]: E0123 01:41:32.079340 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f581c85b1aa54e5b518e69f67c57b649417ee0fe2a5d021469d0d40729be6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.080424 kubelet[2830]: E0123 01:41:32.080290 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f581c85b1aa54e5b518e69f67c57b649417ee0fe2a5d021469d0d40729be6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-snsbz" Jan 23 01:41:32.082016 containerd[1583]: time="2026-01-23T01:41:32.081170948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99xj6,Uid:ca431a16-f60f-49a8-ad90-ef63fc269ffe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"372267771383aa6f827e177806d15e5a3439a8b48ce6a399c95915980ba050de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.083131 kubelet[2830]: E0123 01:41:32.082448 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f581c85b1aa54e5b518e69f67c57b649417ee0fe2a5d021469d0d40729be6b\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-snsbz" Jan 23 01:41:32.083131 kubelet[2830]: E0123 01:41:32.082811 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-snsbz_kube-system(c5579b08-3693-4846-9ba6-5f0864556381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-snsbz_kube-system(c5579b08-3693-4846-9ba6-5f0864556381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5f581c85b1aa54e5b518e69f67c57b649417ee0fe2a5d021469d0d40729be6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-snsbz" podUID="c5579b08-3693-4846-9ba6-5f0864556381" Jan 23 01:41:32.084899 kubelet[2830]: E0123 01:41:32.084837 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"372267771383aa6f827e177806d15e5a3439a8b48ce6a399c95915980ba050de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.084969 kubelet[2830]: E0123 01:41:32.084896 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"372267771383aa6f827e177806d15e5a3439a8b48ce6a399c95915980ba050de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-99xj6" Jan 23 01:41:32.084969 kubelet[2830]: E0123 01:41:32.084922 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"372267771383aa6f827e177806d15e5a3439a8b48ce6a399c95915980ba050de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-99xj6" Jan 23 01:41:32.085028 kubelet[2830]: E0123 01:41:32.084974 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-99xj6_kube-system(ca431a16-f60f-49a8-ad90-ef63fc269ffe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-99xj6_kube-system(ca431a16-f60f-49a8-ad90-ef63fc269ffe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"372267771383aa6f827e177806d15e5a3439a8b48ce6a399c95915980ba050de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-99xj6" podUID="ca431a16-f60f-49a8-ad90-ef63fc269ffe" Jan 23 01:41:32.088250 containerd[1583]: time="2026-01-23T01:41:32.087841674Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-4q5l8,Uid:73bba584-49ff-4a6a-a59a-46cd1ea9004d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c38e36c3a1fa91b08ddca41a7a7197da70cd9cd4fa7d0cf7b3f90a212bc398f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.090434 kubelet[2830]: E0123 01:41:32.090172 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38e36c3a1fa91b08ddca41a7a7197da70cd9cd4fa7d0cf7b3f90a212bc398f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.090434 kubelet[2830]: E0123 01:41:32.090327 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38e36c3a1fa91b08ddca41a7a7197da70cd9cd4fa7d0cf7b3f90a212bc398f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" Jan 23 01:41:32.090434 kubelet[2830]: E0123 01:41:32.090364 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c38e36c3a1fa91b08ddca41a7a7197da70cd9cd4fa7d0cf7b3f90a212bc398f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" Jan 23 01:41:32.091015 kubelet[2830]: E0123 01:41:32.090433 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c38e36c3a1fa91b08ddca41a7a7197da70cd9cd4fa7d0cf7b3f90a212bc398f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:41:32.093376 containerd[1583]: time="2026-01-23T01:41:32.093266078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msxcs,Uid:3557eedc-6578-421a-8c65-fff9d3233af5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3a5c743781967b37ff57fb1150fabc101e48684a628d3d6adbdd4bfb650b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.095384 kubelet[2830]: E0123 01:41:32.095172 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3a5c743781967b37ff57fb1150fabc101e48684a628d3d6adbdd4bfb650b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 23 01:41:32.095384 kubelet[2830]: E0123 01:41:32.095234 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3a5c743781967b37ff57fb1150fabc101e48684a628d3d6adbdd4bfb650b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:32.095384 kubelet[2830]: E0123 01:41:32.095265 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61d3a5c743781967b37ff57fb1150fabc101e48684a628d3d6adbdd4bfb650b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:32.096014 kubelet[2830]: E0123 01:41:32.095326 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61d3a5c743781967b37ff57fb1150fabc101e48684a628d3d6adbdd4bfb650b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:41:32.380843 systemd[1]: Created slice kubepods-besteffort-podd4655da0_4d87_462c_8176_c9772e42f76a.slice - libcontainer container kubepods-besteffort-podd4655da0_4d87_462c_8176_c9772e42f76a.slice. 
Jan 23 01:41:32.390883 containerd[1583]: time="2026-01-23T01:41:32.390315928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xq6t,Uid:d4655da0-4d87-462c-8176-c9772e42f76a,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:32.677332 containerd[1583]: time="2026-01-23T01:41:32.677168800Z" level=error msg="Failed to destroy network for sandbox \"33e8d46ba2d30eb69e37638f61501d43c565b5df96a147664f7405b5e6cd21d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.683807 containerd[1583]: time="2026-01-23T01:41:32.683392590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xq6t,Uid:d4655da0-4d87-462c-8176-c9772e42f76a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e8d46ba2d30eb69e37638f61501d43c565b5df96a147664f7405b5e6cd21d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.684801 kubelet[2830]: E0123 01:41:32.684759 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e8d46ba2d30eb69e37638f61501d43c565b5df96a147664f7405b5e6cd21d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:32.685107 kubelet[2830]: E0123 01:41:32.684924 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e8d46ba2d30eb69e37638f61501d43c565b5df96a147664f7405b5e6cd21d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:32.685107 kubelet[2830]: E0123 01:41:32.685059 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e8d46ba2d30eb69e37638f61501d43c565b5df96a147664f7405b5e6cd21d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:32.685107 kubelet[2830]: E0123 01:41:32.685113 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33e8d46ba2d30eb69e37638f61501d43c565b5df96a147664f7405b5e6cd21d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:32.719020 systemd[1]: run-netns-cni\x2d083082b8\x2de13f\x2dc387\x2db1f6\x2d57b400d474fb.mount: Deactivated successfully. 
Jan 23 01:41:32.719224 systemd[1]: run-netns-cni\x2d13f65d37\x2d8f94\x2dd801\x2d9465\x2dcf8d4512fb50.mount: Deactivated successfully. Jan 23 01:41:32.719337 systemd[1]: run-netns-cni\x2d9e0d12b9\x2df64a\x2d8bf5\x2dd8be\x2d44e04424b1bb.mount: Deactivated successfully. Jan 23 01:41:43.373102 containerd[1583]: time="2026-01-23T01:41:43.372322754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-snsbz,Uid:c5579b08-3693-4846-9ba6-5f0864556381,Namespace:kube-system,Attempt:0,}" Jan 23 01:41:43.373102 containerd[1583]: time="2026-01-23T01:41:43.372351315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f46868fc8-4wnj7,Uid:b0a954f0-0bce-4a3f-aa7d-4601546324c7,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:43.374169 containerd[1583]: time="2026-01-23T01:41:43.372415034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msxcs,Uid:3557eedc-6578-421a-8c65-fff9d3233af5,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:43.748968 containerd[1583]: time="2026-01-23T01:41:43.748763076Z" level=error msg="Failed to destroy network for sandbox \"f5b5a3b52067877fbb8ee2a163fd7ecfbf3523a27a0c503c41804f851eb5f924\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.753979 systemd[1]: run-netns-cni\x2d03e2b004\x2d3ba6\x2d794f\x2df269\x2db9e3389e3175.mount: Deactivated successfully. Jan 23 01:41:43.764793 containerd[1583]: time="2026-01-23T01:41:43.764323805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-snsbz,Uid:c5579b08-3693-4846-9ba6-5f0864556381,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b5a3b52067877fbb8ee2a163fd7ecfbf3523a27a0c503c41804f851eb5f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.765292 kubelet[2830]: E0123 01:41:43.765253 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b5a3b52067877fbb8ee2a163fd7ecfbf3523a27a0c503c41804f851eb5f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.766789 kubelet[2830]: E0123 01:41:43.766078 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b5a3b52067877fbb8ee2a163fd7ecfbf3523a27a0c503c41804f851eb5f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-snsbz" Jan 23 01:41:43.766789 kubelet[2830]: E0123 01:41:43.766122 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b5a3b52067877fbb8ee2a163fd7ecfbf3523a27a0c503c41804f851eb5f924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-snsbz" Jan 23 01:41:43.766789 kubelet[2830]: 
E0123 01:41:43.766263 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-snsbz_kube-system(c5579b08-3693-4846-9ba6-5f0864556381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-snsbz_kube-system(c5579b08-3693-4846-9ba6-5f0864556381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5b5a3b52067877fbb8ee2a163fd7ecfbf3523a27a0c503c41804f851eb5f924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-snsbz" podUID="c5579b08-3693-4846-9ba6-5f0864556381" Jan 23 01:41:43.770115 containerd[1583]: time="2026-01-23T01:41:43.767983325Z" level=error msg="Failed to destroy network for sandbox \"dcf109778d836725d99086c495a907f1d5ea0ab576b0147b070fdf64fb75c46e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.771999 containerd[1583]: time="2026-01-23T01:41:43.771873395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f46868fc8-4wnj7,Uid:b0a954f0-0bce-4a3f-aa7d-4601546324c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf109778d836725d99086c495a907f1d5ea0ab576b0147b070fdf64fb75c46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.772335 kubelet[2830]: E0123 01:41:43.772201 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf109778d836725d99086c495a907f1d5ea0ab576b0147b070fdf64fb75c46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.772335 kubelet[2830]: E0123 01:41:43.772245 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf109778d836725d99086c495a907f1d5ea0ab576b0147b070fdf64fb75c46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 01:41:43.772335 kubelet[2830]: E0123 01:41:43.772263 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcf109778d836725d99086c495a907f1d5ea0ab576b0147b070fdf64fb75c46e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f46868fc8-4wnj7" Jan 23 01:41:43.772433 kubelet[2830]: E0123 01:41:43.772312 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f46868fc8-4wnj7_calico-system(b0a954f0-0bce-4a3f-aa7d-4601546324c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f46868fc8-4wnj7_calico-system(b0a954f0-0bce-4a3f-aa7d-4601546324c7)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"dcf109778d836725d99086c495a907f1d5ea0ab576b0147b070fdf64fb75c46e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f46868fc8-4wnj7" podUID="b0a954f0-0bce-4a3f-aa7d-4601546324c7" Jan 23 01:41:43.774282 systemd[1]: run-netns-cni\x2d7a833d9a\x2d396e\x2d50d1\x2d21e9\x2d664faea9d89f.mount: Deactivated successfully. Jan 23 01:41:43.777114 containerd[1583]: time="2026-01-23T01:41:43.777075908Z" level=error msg="Failed to destroy network for sandbox \"97b8932e1cbf0e8e41da660f7afbd15c2b6fc43edf8f1e16afc309ae64cb2a64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.786479 containerd[1583]: time="2026-01-23T01:41:43.786297232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msxcs,Uid:3557eedc-6578-421a-8c65-fff9d3233af5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b8932e1cbf0e8e41da660f7afbd15c2b6fc43edf8f1e16afc309ae64cb2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.789284 kubelet[2830]: E0123 01:41:43.788971 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b8932e1cbf0e8e41da660f7afbd15c2b6fc43edf8f1e16afc309ae64cb2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:43.789284 kubelet[2830]: E0123 01:41:43.789048 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b8932e1cbf0e8e41da660f7afbd15c2b6fc43edf8f1e16afc309ae64cb2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:43.789284 kubelet[2830]: E0123 01:41:43.789068 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b8932e1cbf0e8e41da660f7afbd15c2b6fc43edf8f1e16afc309ae64cb2a64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-msxcs" Jan 23 01:41:43.789854 kubelet[2830]: E0123 01:41:43.789113 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b8932e1cbf0e8e41da660f7afbd15c2b6fc43edf8f1e16afc309ae64cb2a64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:41:44.368066 containerd[1583]: time="2026-01-23T01:41:44.367054148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99xj6,Uid:ca431a16-f60f-49a8-ad90-ef63fc269ffe,Namespace:kube-system,Attempt:0,}" Jan 23 01:41:44.368428 containerd[1583]: time="2026-01-23T01:41:44.368394338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9c68dbf8-nsnd4,Uid:117ed452-382a-4cae-a50f-439078d719fb,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:44.430786 systemd[1]: run-netns-cni\x2de8878f11\x2d76ec\x2dcfd6\x2decb0\x2d65493d673f4a.mount: Deactivated successfully. Jan 23 01:41:44.789915 containerd[1583]: time="2026-01-23T01:41:44.789416607Z" level=error msg="Failed to destroy network for sandbox \"9ff907b3a40756852759524c8bb78aecc0a4a48e697300a1793e08d5fc320610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:44.802229 systemd[1]: run-netns-cni\x2d21058938\x2d2b46\x2de6c6\x2d00eb\x2dea3b9975f5af.mount: Deactivated successfully. Jan 23 01:41:44.821032 containerd[1583]: time="2026-01-23T01:41:44.820742712Z" level=error msg="Failed to destroy network for sandbox \"6ffc37898e1e4132a1ad39e0bf07904d6ea02568458b96863935240a12d73710\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:44.826770 systemd[1]: run-netns-cni\x2d038147ab\x2d70a8\x2d41b3\x2d46f4\x2dced78c57e31a.mount: Deactivated successfully. 
Jan 23 01:41:44.848809 containerd[1583]: time="2026-01-23T01:41:44.848436912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9c68dbf8-nsnd4,Uid:117ed452-382a-4cae-a50f-439078d719fb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff907b3a40756852759524c8bb78aecc0a4a48e697300a1793e08d5fc320610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:44.850788 kubelet[2830]: E0123 01:41:44.849910 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff907b3a40756852759524c8bb78aecc0a4a48e697300a1793e08d5fc320610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:44.850788 kubelet[2830]: E0123 01:41:44.849991 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff907b3a40756852759524c8bb78aecc0a4a48e697300a1793e08d5fc320610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" Jan 23 01:41:44.850788 kubelet[2830]: E0123 01:41:44.850022 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ff907b3a40756852759524c8bb78aecc0a4a48e697300a1793e08d5fc320610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" Jan 23 01:41:44.851389 kubelet[2830]: E0123 01:41:44.850087 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ff907b3a40756852759524c8bb78aecc0a4a48e697300a1793e08d5fc320610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:41:44.853141 containerd[1583]: time="2026-01-23T01:41:44.851898142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99xj6,Uid:ca431a16-f60f-49a8-ad90-ef63fc269ffe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ffc37898e1e4132a1ad39e0bf07904d6ea02568458b96863935240a12d73710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:44.853369 kubelet[2830]: E0123 01:41:44.852247 2830 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ffc37898e1e4132a1ad39e0bf07904d6ea02568458b96863935240a12d73710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:44.853369 kubelet[2830]: E0123 01:41:44.852294 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ffc37898e1e4132a1ad39e0bf07904d6ea02568458b96863935240a12d73710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-99xj6" Jan 23 01:41:44.853369 kubelet[2830]: E0123 01:41:44.852317 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ffc37898e1e4132a1ad39e0bf07904d6ea02568458b96863935240a12d73710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-99xj6" Jan 23 01:41:44.853450 kubelet[2830]: E0123 01:41:44.852365 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-99xj6_kube-system(ca431a16-f60f-49a8-ad90-ef63fc269ffe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-99xj6_kube-system(ca431a16-f60f-49a8-ad90-ef63fc269ffe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ffc37898e1e4132a1ad39e0bf07904d6ea02568458b96863935240a12d73710\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-99xj6" podUID="ca431a16-f60f-49a8-ad90-ef63fc269ffe" Jan 23 01:41:45.368911 containerd[1583]: time="2026-01-23T01:41:45.368069358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xq6t,Uid:d4655da0-4d87-462c-8176-c9772e42f76a,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:45.369834 containerd[1583]: time="2026-01-23T01:41:45.368077138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-4q5l8,Uid:73bba584-49ff-4a6a-a59a-46cd1ea9004d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:41:45.649734 containerd[1583]: time="2026-01-23T01:41:45.646288726Z" level=error msg="Failed to destroy network for sandbox \"23e5f1c36d76e773e263e181980d3f6df833f7c9e2bc511edfd38c4b267e014a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:45.652947 systemd[1]: run-netns-cni\x2da898b4b5\x2d00d8\x2d45fd\x2d0220\x2d12e8740fdf5d.mount: Deactivated successfully. 
Jan 23 01:41:45.677059 containerd[1583]: time="2026-01-23T01:41:45.676950935Z" level=error msg="Failed to destroy network for sandbox \"5b4e1ecadd71f1006d33a25cb44452357e88a68d4f66a8b9be2d5ca1021dc023\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:45.682431 systemd[1]: run-netns-cni\x2d201d3d2e\x2d55b4\x2d7b67\x2d77ba\x2deed0114e0616.mount: Deactivated successfully. Jan 23 01:41:45.697828 containerd[1583]: time="2026-01-23T01:41:45.697776935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xq6t,Uid:d4655da0-4d87-462c-8176-c9772e42f76a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e5f1c36d76e773e263e181980d3f6df833f7c9e2bc511edfd38c4b267e014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:45.700377 kubelet[2830]: E0123 01:41:45.698305 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e5f1c36d76e773e263e181980d3f6df833f7c9e2bc511edfd38c4b267e014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:45.700377 kubelet[2830]: E0123 01:41:45.698372 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e5f1c36d76e773e263e181980d3f6df833f7c9e2bc511edfd38c4b267e014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:45.700377 kubelet[2830]: E0123 01:41:45.698405 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e5f1c36d76e773e263e181980d3f6df833f7c9e2bc511edfd38c4b267e014a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7xq6t" Jan 23 01:41:45.700869 kubelet[2830]: E0123 01:41:45.698463 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23e5f1c36d76e773e263e181980d3f6df833f7c9e2bc511edfd38c4b267e014a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:45.704003 containerd[1583]: time="2026-01-23T01:41:45.703872784Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-4q5l8,Uid:73bba584-49ff-4a6a-a59a-46cd1ea9004d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b4e1ecadd71f1006d33a25cb44452357e88a68d4f66a8b9be2d5ca1021dc023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:45.705992 kubelet[2830]: E0123 01:41:45.705889 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b4e1ecadd71f1006d33a25cb44452357e88a68d4f66a8b9be2d5ca1021dc023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:45.706050 kubelet[2830]: E0123 01:41:45.706019 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b4e1ecadd71f1006d33a25cb44452357e88a68d4f66a8b9be2d5ca1021dc023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" Jan 23 01:41:45.706083 kubelet[2830]: E0123 01:41:45.706046 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b4e1ecadd71f1006d33a25cb44452357e88a68d4f66a8b9be2d5ca1021dc023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" Jan 23 01:41:45.706128 kubelet[2830]: E0123 01:41:45.706100 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b4e1ecadd71f1006d33a25cb44452357e88a68d4f66a8b9be2d5ca1021dc023\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:41:47.366924 containerd[1583]: time="2026-01-23T01:41:47.366424342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-p2bxz,Uid:45f8db5c-10e9-4970-b59b-9e6ccdff633a,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:41:47.581066 containerd[1583]: time="2026-01-23T01:41:47.580875582Z" level=error msg="Failed to destroy network for sandbox \"341c923c86c6d4906c9255135dbfece2e0331e6f7bae0d444c76bc8dce2e89e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:47.585193 systemd[1]: run-netns-cni\x2d5c76414f\x2ddf69\x2d18bb\x2d3219\x2d6ab5945b6bf2.mount: Deactivated successfully. 
Jan 23 01:41:47.594373 containerd[1583]: time="2026-01-23T01:41:47.594218214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-p2bxz,Uid:45f8db5c-10e9-4970-b59b-9e6ccdff633a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c923c86c6d4906c9255135dbfece2e0331e6f7bae0d444c76bc8dce2e89e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:47.595160 kubelet[2830]: E0123 01:41:47.595014 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c923c86c6d4906c9255135dbfece2e0331e6f7bae0d444c76bc8dce2e89e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:41:47.595869 kubelet[2830]: E0123 01:41:47.595792 2830 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c923c86c6d4906c9255135dbfece2e0331e6f7bae0d444c76bc8dce2e89e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" Jan 23 01:41:47.595869 kubelet[2830]: E0123 01:41:47.595839 2830 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"341c923c86c6d4906c9255135dbfece2e0331e6f7bae0d444c76bc8dce2e89e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" Jan 23 01:41:47.598209 kubelet[2830]: E0123 01:41:47.597110 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"341c923c86c6d4906c9255135dbfece2e0331e6f7bae0d444c76bc8dce2e89e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:41:47.749455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645176250.mount: Deactivated successfully. 
Jan 23 01:41:47.804773 containerd[1583]: time="2026-01-23T01:41:47.804077075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:47.807464 containerd[1583]: time="2026-01-23T01:41:47.807428676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:41:47.814777 containerd[1583]: time="2026-01-23T01:41:47.813287365Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:47.830690 containerd[1583]: time="2026-01-23T01:41:47.830443159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:41:47.831980 containerd[1583]: time="2026-01-23T01:41:47.831765066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 16.420672717s" Jan 23 01:41:47.831980 containerd[1583]: time="2026-01-23T01:41:47.831914935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:41:47.885838 containerd[1583]: time="2026-01-23T01:41:47.885344432Z" level=info msg="CreateContainer within sandbox \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:41:47.918197 containerd[1583]: time="2026-01-23T01:41:47.917826672Z" level=info msg="Container 48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:41:47.948806 containerd[1583]: time="2026-01-23T01:41:47.945963674Z" level=info msg="CreateContainer within sandbox \"4a860c5c357e9d7c8f83151ff52d76820d193313acf4936cb30385d0ede02795\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce\"" Jan 23 01:41:47.948806 containerd[1583]: time="2026-01-23T01:41:47.947115311Z" level=info msg="StartContainer for \"48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce\"" Jan 23 01:41:47.953163 containerd[1583]: time="2026-01-23T01:41:47.951019368Z" level=info msg="connecting to shim 48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce" address="unix:///run/containerd/s/fc05300187cee6678d0ddbd20715d2074bb3c2810499a4a653120c7516c450a7" protocol=ttrpc version=3 Jan 23 01:41:48.016124 systemd[1]: Started cri-containerd-48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce.scope - libcontainer container 48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce. Jan 23 01:41:48.256293 containerd[1583]: time="2026-01-23T01:41:48.255845352Z" level=info msg="StartContainer for \"48a15fa50f6bda4d43f26427429964fe48bfe993881355b55ec9572bf8d955ce\" returns successfully" Jan 23 01:41:48.542204 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:41:48.548402 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Jan 23 01:41:48.562369 kubelet[2830]: I0123 01:41:48.560130 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d272b" podStartSLOduration=2.118090465 podStartE2EDuration="32.560109268s" podCreationTimestamp="2026-01-23 01:41:16 +0000 UTC" firstStartedPulling="2026-01-23 01:41:17.395017793 +0000 UTC m=+47.358024974" lastFinishedPulling="2026-01-23 01:41:47.837036595 +0000 UTC m=+77.800043777" observedRunningTime="2026-01-23 01:41:48.558846905 +0000 UTC m=+78.521854087" watchObservedRunningTime="2026-01-23 01:41:48.560109268 +0000 UTC m=+78.523116471" Jan 23 01:41:49.142746 kubelet[2830]: I0123 01:41:49.141222 2830 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-backend-key-pair\") pod \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\" (UID: \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\") " Jan 23 01:41:49.142746 kubelet[2830]: I0123 01:41:49.141279 2830 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-ca-bundle\") pod \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\" (UID: \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\") " Jan 23 01:41:49.142746 kubelet[2830]: I0123 01:41:49.141323 2830 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/b0a954f0-0bce-4a3f-aa7d-4601546324c7-kube-api-access-br642\") pod \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\" (UID: \"b0a954f0-0bce-4a3f-aa7d-4601546324c7\") " Jan 23 01:41:49.149775 kubelet[2830]: I0123 01:41:49.149413 2830 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b0a954f0-0bce-4a3f-aa7d-4601546324c7" (UID: "b0a954f0-0bce-4a3f-aa7d-4601546324c7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:41:49.186755 systemd[1]: var-lib-kubelet-pods-b0a954f0\x2d0bce\x2d4a3f\x2daa7d\x2d4601546324c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr642.mount: Deactivated successfully. Jan 23 01:41:49.196397 kubelet[2830]: I0123 01:41:49.196293 2830 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a954f0-0bce-4a3f-aa7d-4601546324c7-kube-api-access-br642" (OuterVolumeSpecName: "kube-api-access-br642") pod "b0a954f0-0bce-4a3f-aa7d-4601546324c7" (UID: "b0a954f0-0bce-4a3f-aa7d-4601546324c7"). InnerVolumeSpecName "kube-api-access-br642". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:41:49.198846 kubelet[2830]: I0123 01:41:49.198268 2830 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b0a954f0-0bce-4a3f-aa7d-4601546324c7" (UID: "b0a954f0-0bce-4a3f-aa7d-4601546324c7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:41:49.206775 systemd[1]: var-lib-kubelet-pods-b0a954f0\x2d0bce\x2d4a3f\x2daa7d\x2d4601546324c7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 01:41:49.243974 kubelet[2830]: I0123 01:41:49.243871 2830 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-br642\" (UniqueName: \"kubernetes.io/projected/b0a954f0-0bce-4a3f-aa7d-4601546324c7-kube-api-access-br642\") on node \"localhost\" DevicePath \"\"" Jan 23 01:41:49.243974 kubelet[2830]: I0123 01:41:49.243925 2830 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 01:41:49.243974 kubelet[2830]: I0123 01:41:49.243942 2830 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0a954f0-0bce-4a3f-aa7d-4601546324c7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 01:41:49.498409 systemd[1]: Removed slice kubepods-besteffort-podb0a954f0_0bce_4a3f_aa7d_4601546324c7.slice - libcontainer container kubepods-besteffort-podb0a954f0_0bce_4a3f_aa7d_4601546324c7.slice. Jan 23 01:41:49.771790 systemd[1]: Created slice kubepods-besteffort-podaf904215_7ebf_4966_8c59_420d5c45351b.slice - libcontainer container kubepods-besteffort-podaf904215_7ebf_4966_8c59_420d5c45351b.slice. Jan 23 01:41:49.861721 kubelet[2830]: I0123 01:41:49.860896 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af904215-7ebf-4966-8c59-420d5c45351b-whisker-backend-key-pair\") pod \"whisker-54f6844c7b-qww2p\" (UID: \"af904215-7ebf-4966-8c59-420d5c45351b\") " pod="calico-system/whisker-54f6844c7b-qww2p" Jan 23 01:41:49.861721 kubelet[2830]: I0123 01:41:49.860945 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af904215-7ebf-4966-8c59-420d5c45351b-whisker-ca-bundle\") pod \"whisker-54f6844c7b-qww2p\" (UID: \"af904215-7ebf-4966-8c59-420d5c45351b\") " pod="calico-system/whisker-54f6844c7b-qww2p" Jan 23 01:41:49.861721 kubelet[2830]: I0123 01:41:49.860960 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85j8m\" (UniqueName: \"kubernetes.io/projected/af904215-7ebf-4966-8c59-420d5c45351b-kube-api-access-85j8m\") pod \"whisker-54f6844c7b-qww2p\" (UID: \"af904215-7ebf-4966-8c59-420d5c45351b\") " pod="calico-system/whisker-54f6844c7b-qww2p" Jan 23 01:41:50.084434 containerd[1583]: time="2026-01-23T01:41:50.083900960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54f6844c7b-qww2p,Uid:af904215-7ebf-4966-8c59-420d5c45351b,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:50.372769 kubelet[2830]: I0123 01:41:50.372403 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a954f0-0bce-4a3f-aa7d-4601546324c7" path="/var/lib/kubelet/pods/b0a954f0-0bce-4a3f-aa7d-4601546324c7/volumes" Jan 23 01:41:50.779255 systemd-networkd[1471]: cali4f4c6b498c5: Link UP Jan 23 01:41:50.781439 systemd-networkd[1471]: cali4f4c6b498c5: Gained carrier Jan 23 01:41:50.823871 containerd[1583]: 2026-01-23 01:41:50.206 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 01:41:50.823871 containerd[1583]: 2026-01-23 01:41:50.288 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54f6844c7b--qww2p-eth0 whisker-54f6844c7b- calico-system 
af904215-7ebf-4966-8c59-420d5c45351b 978 0 2026-01-23 01:41:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54f6844c7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54f6844c7b-qww2p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4f4c6b498c5 [] [] }} ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-" Jan 23 01:41:50.823871 containerd[1583]: 2026-01-23 01:41:50.288 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:50.823871 containerd[1583]: 2026-01-23 01:41:50.569 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" HandleID="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Workload="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.573 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" HandleID="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Workload="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c8530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54f6844c7b-qww2p", "timestamp":"2026-01-23 01:41:50.569427352 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.574 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.574 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.575 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.599 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" host="localhost" Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.622 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.646 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.658 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.668 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:50.824276 containerd[1583]: 2026-01-23 01:41:50.668 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" host="localhost" Jan 23 01:41:50.824905 containerd[1583]: 2026-01-23 01:41:50.675 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134 Jan 23 01:41:50.824905 containerd[1583]: 2026-01-23 01:41:50.687 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" host="localhost" Jan 23 01:41:50.824905 containerd[1583]: 2026-01-23 01:41:50.722 [INFO][4303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" host="localhost" Jan 23 01:41:50.824905 containerd[1583]: 2026-01-23 01:41:50.723 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" host="localhost" Jan 23 01:41:50.824905 containerd[1583]: 2026-01-23 01:41:50.723 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:41:50.824905 containerd[1583]: 2026-01-23 01:41:50.723 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" HandleID="k8s-pod-network.28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Workload="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:50.825125 containerd[1583]: 2026-01-23 01:41:50.731 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54f6844c7b--qww2p-eth0", GenerateName:"whisker-54f6844c7b-", Namespace:"calico-system", SelfLink:"", UID:"af904215-7ebf-4966-8c59-420d5c45351b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54f6844c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54f6844c7b-qww2p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4f4c6b498c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:50.825125 containerd[1583]: 2026-01-23 01:41:50.736 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:50.825334 containerd[1583]: 2026-01-23 01:41:50.736 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f4c6b498c5 ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:50.825334 containerd[1583]: 2026-01-23 01:41:50.780 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:50.825375 containerd[1583]: 2026-01-23 01:41:50.781 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54f6844c7b--qww2p-eth0", GenerateName:"whisker-54f6844c7b-", Namespace:"calico-system", SelfLink:"", UID:"af904215-7ebf-4966-8c59-420d5c45351b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54f6844c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134", Pod:"whisker-54f6844c7b-qww2p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4f4c6b498c5", MAC:"96:e9:3f:9f:b1:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:50.826219 containerd[1583]: 2026-01-23 01:41:50.816 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" Namespace="calico-system" Pod="whisker-54f6844c7b-qww2p" WorkloadEndpoint="localhost-k8s-whisker--54f6844c7b--qww2p-eth0" Jan 23 01:41:51.056858 containerd[1583]: time="2026-01-23T01:41:51.056087931Z" level=info msg="connecting to shim 28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134" address="unix:///run/containerd/s/d104a9e39d293edaa14f787fe2ebb33229827f04f969e2f280d3b81213168ec2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:51.307968 systemd[1]: Started cri-containerd-28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134.scope - libcontainer container 28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134. 
Jan 23 01:41:51.370107 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:41:51.502077 containerd[1583]: time="2026-01-23T01:41:51.502032558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54f6844c7b-qww2p,Uid:af904215-7ebf-4966-8c59-420d5c45351b,Namespace:calico-system,Attempt:0,} returns sandbox id \"28888b8c8055df9b9aafb7402d419842f967015d06cc229ba7a14d9bb307e134\"" Jan 23 01:41:51.510876 containerd[1583]: time="2026-01-23T01:41:51.510848491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:41:51.624859 containerd[1583]: time="2026-01-23T01:41:51.624040489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:41:51.631131 containerd[1583]: time="2026-01-23T01:41:51.630819615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:41:51.650367 containerd[1583]: time="2026-01-23T01:41:51.648104770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:41:51.650697 kubelet[2830]: E0123 01:41:51.649050 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:41:51.650697 kubelet[2830]: E0123 01:41:51.649174 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:41:51.653309 kubelet[2830]: E0123 01:41:51.652151 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:807a26d448fa42adb3b80c712c77b43e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:41:51.660216 containerd[1583]: time="2026-01-23T01:41:51.659424872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:41:51.734044 containerd[1583]: time="2026-01-23T01:41:51.733808241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:41:51.738374 containerd[1583]: time="2026-01-23T01:41:51.737992320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:41:51.738374 containerd[1583]: time="2026-01-23T01:41:51.738183477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:41:51.740213 kubelet[2830]: E0123 01:41:51.739182 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:41:51.740213 kubelet[2830]: E0123 01:41:51.739244 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:41:51.740303 kubelet[2830]: E0123 01:41:51.739358 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:41:51.742116 kubelet[2830]: E0123 01:41:51.741978 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:41:51.984795 systemd-networkd[1471]: cali4f4c6b498c5: Gained IPv6LL Jan 23 01:41:52.465149 systemd-networkd[1471]: vxlan.calico: Link UP Jan 23 01:41:52.465237 systemd-networkd[1471]: vxlan.calico: Gained carrier Jan 23 01:41:52.501294 kubelet[2830]: E0123 01:41:52.500198 2830 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:41:53.505093 kubelet[2830]: E0123 01:41:53.504940 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:41:54.544989 systemd-networkd[1471]: vxlan.calico: Gained IPv6LL Jan 23 01:41:55.366802 containerd[1583]: time="2026-01-23T01:41:55.366458200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9c68dbf8-nsnd4,Uid:117ed452-382a-4cae-a50f-439078d719fb,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:55.795099 systemd-networkd[1471]: cali92d46097b68: Link UP Jan 23 01:41:55.797881 systemd-networkd[1471]: cali92d46097b68: Gained carrier Jan 23 01:41:55.833202 containerd[1583]: 2026-01-23 01:41:55.505 [INFO][4581] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0 calico-kube-controllers-6c9c68dbf8- calico-system 117ed452-382a-4cae-a50f-439078d719fb 873 0 2026-01-23 01:41:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6c9c68dbf8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6c9c68dbf8-nsnd4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92d46097b68 [] [] }} ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-" Jan 23 01:41:55.833202 containerd[1583]: 2026-01-23 01:41:55.505 [INFO][4581] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.833202 containerd[1583]: 2026-01-23 01:41:55.690 [INFO][4596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" HandleID="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Workload="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.690 [INFO][4596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" HandleID="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Workload="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004360e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6c9c68dbf8-nsnd4", "timestamp":"2026-01-23 01:41:55.690029685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.690 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.690 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.690 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.707 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" host="localhost" Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.723 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.738 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.743 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.751 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:55.833907 containerd[1583]: 2026-01-23 01:41:55.751 [INFO][4596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" host="localhost" Jan 23 01:41:55.834308 containerd[1583]: 2026-01-23 01:41:55.758 [INFO][4596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2 Jan 23 01:41:55.834308 containerd[1583]: 2026-01-23 01:41:55.771 [INFO][4596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" host="localhost" Jan 23 01:41:55.834308 containerd[1583]: 2026-01-23 01:41:55.782 [INFO][4596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" host="localhost" Jan 23 01:41:55.834308 containerd[1583]: 2026-01-23 01:41:55.782 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" host="localhost" Jan 23 01:41:55.834308 containerd[1583]: 2026-01-23 01:41:55.782 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:41:55.834308 containerd[1583]: 2026-01-23 01:41:55.782 [INFO][4596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" HandleID="k8s-pod-network.1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Workload="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.834747 containerd[1583]: 2026-01-23 01:41:55.789 [INFO][4581] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0", GenerateName:"calico-kube-controllers-6c9c68dbf8-", Namespace:"calico-system", SelfLink:"", UID:"117ed452-382a-4cae-a50f-439078d719fb", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c9c68dbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6c9c68dbf8-nsnd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92d46097b68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:55.834924 containerd[1583]: 2026-01-23 01:41:55.789 [INFO][4581] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.834924 containerd[1583]: 2026-01-23 01:41:55.789 [INFO][4581] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92d46097b68 ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.834924 containerd[1583]: 2026-01-23 01:41:55.796 [INFO][4581] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.834992 containerd[1583]: 2026-01-23 01:41:55.799 [INFO][4581] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0", GenerateName:"calico-kube-controllers-6c9c68dbf8-", Namespace:"calico-system", SelfLink:"", UID:"117ed452-382a-4cae-a50f-439078d719fb", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6c9c68dbf8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2", Pod:"calico-kube-controllers-6c9c68dbf8-nsnd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92d46097b68", MAC:"fe:b3:f7:03:a8:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:55.835156 containerd[1583]: 2026-01-23 01:41:55.821 [INFO][4581] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" Namespace="calico-system" Pod="calico-kube-controllers-6c9c68dbf8-nsnd4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6c9c68dbf8--nsnd4-eth0" Jan 23 01:41:55.903929 containerd[1583]: time="2026-01-23T01:41:55.903417878Z" level=info msg="connecting to shim 1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2" address="unix:///run/containerd/s/f1346e035287440b9b563a7634ebb755233c407223a05195cb322d3fa80b9530" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:56.003280 systemd[1]: Started cri-containerd-1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2.scope - libcontainer container 1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2. 
Jan 23 01:41:56.042792 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:41:56.177424 containerd[1583]: time="2026-01-23T01:41:56.177212279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6c9c68dbf8-nsnd4,Uid:117ed452-382a-4cae-a50f-439078d719fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"1100f5b38efc109070879b861b7c40b0baa00ae2209d3611e14f97e5bec761b2\"" Jan 23 01:41:56.184417 containerd[1583]: time="2026-01-23T01:41:56.184098781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:41:56.252768 containerd[1583]: time="2026-01-23T01:41:56.252332680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:41:56.256090 containerd[1583]: time="2026-01-23T01:41:56.255975098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:41:56.256090 containerd[1583]: time="2026-01-23T01:41:56.256057110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:41:56.256459 kubelet[2830]: E0123 01:41:56.256428 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:41:56.257370 kubelet[2830]: E0123 01:41:56.257052 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:41:56.258078 kubelet[2830]: E0123 01:41:56.258032 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrfkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:41:56.263095 kubelet[2830]: E0123 01:41:56.263049 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:41:56.367390 containerd[1583]: time="2026-01-23T01:41:56.367237659Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-7xq6t,Uid:d4655da0-4d87-462c-8176-c9772e42f76a,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:56.528251 kubelet[2830]: E0123 01:41:56.528070 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:41:56.721764 systemd-networkd[1471]: cali4619ba44ef7: Link UP Jan 23 01:41:56.722995 systemd-networkd[1471]: cali4619ba44ef7: Gained carrier Jan 23 01:41:56.763698 containerd[1583]: 2026-01-23 01:41:56.500 [INFO][4656] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7xq6t-eth0 csi-node-driver- calico-system d4655da0-4d87-462c-8176-c9772e42f76a 763 0 2026-01-23 01:41:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7xq6t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4619ba44ef7 [] [] }} ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-" Jan 23 01:41:56.763698 containerd[1583]: 2026-01-23 01:41:56.501 [INFO][4656] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.763698 containerd[1583]: 2026-01-23 01:41:56.610 [INFO][4670] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" HandleID="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Workload="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.611 [INFO][4670] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" HandleID="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Workload="localhost-k8s-csi--node--driver--7xq6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f1f10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7xq6t", "timestamp":"2026-01-23 01:41:56.610400884 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.611 [INFO][4670] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.611 [INFO][4670] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.611 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.628 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" host="localhost" Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.642 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.660 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.666 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.675 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:56.764067 containerd[1583]: 2026-01-23 01:41:56.675 [INFO][4670] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" host="localhost" Jan 23 01:41:56.764426 containerd[1583]: 2026-01-23 01:41:56.681 [INFO][4670] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a Jan 23 01:41:56.764426 containerd[1583]: 2026-01-23 01:41:56.688 [INFO][4670] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" host="localhost" Jan 23 01:41:56.764426 containerd[1583]: 2026-01-23 01:41:56.706 [INFO][4670] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" host="localhost" Jan 23 01:41:56.764426 containerd[1583]: 2026-01-23 01:41:56.706 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" host="localhost" Jan 23 01:41:56.764426 containerd[1583]: 2026-01-23 01:41:56.706 [INFO][4670] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:41:56.764426 containerd[1583]: 2026-01-23 01:41:56.706 [INFO][4670] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" HandleID="k8s-pod-network.e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Workload="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.767320 containerd[1583]: 2026-01-23 01:41:56.711 [INFO][4656] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xq6t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4655da0-4d87-462c-8176-c9772e42f76a", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7xq6t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4619ba44ef7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:56.767905 containerd[1583]: 2026-01-23 01:41:56.711 [INFO][4656] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.767905 containerd[1583]: 2026-01-23 01:41:56.711 [INFO][4656] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4619ba44ef7 ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.767905 containerd[1583]: 2026-01-23 01:41:56.722 [INFO][4656] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.767984 containerd[1583]: 2026-01-23 01:41:56.731 [INFO][4656] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7xq6t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4655da0-4d87-462c-8176-c9772e42f76a", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a", Pod:"csi-node-driver-7xq6t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4619ba44ef7", MAC:"fa:0b:67:8d:b8:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:56.768722 containerd[1583]: 2026-01-23 01:41:56.755 [INFO][4656] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" Namespace="calico-system" Pod="csi-node-driver-7xq6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--7xq6t-eth0" Jan 23 01:41:56.850278 containerd[1583]: time="2026-01-23T01:41:56.849838976Z" level=info msg="connecting to shim e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a" address="unix:///run/containerd/s/cf97159ed21c4d0cdb488b1b08f5e2f51dbc76592620959ce19b14c0fc6c5993" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:56.921924 systemd[1]: Started cri-containerd-e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a.scope - libcontainer container e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a. 
Jan 23 01:41:56.950354 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:41:57.011183 containerd[1583]: time="2026-01-23T01:41:57.011041555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7xq6t,Uid:d4655da0-4d87-462c-8176-c9772e42f76a,Namespace:calico-system,Attempt:0,} returns sandbox id \"e0cf0085b7f60e64326a2b696567673fb4028a393277bb02e957445ddf3f746a\"" Jan 23 01:41:57.020250 containerd[1583]: time="2026-01-23T01:41:57.018733180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:41:57.082187 containerd[1583]: time="2026-01-23T01:41:57.081789574Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:41:57.085073 containerd[1583]: time="2026-01-23T01:41:57.084966554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:41:57.085164 containerd[1583]: time="2026-01-23T01:41:57.085018531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:41:57.085672 kubelet[2830]: E0123 01:41:57.085425 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:41:57.085672 kubelet[2830]: E0123 01:41:57.085469 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:41:57.086245 kubelet[2830]: E0123 01:41:57.086173 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:41:57.090108 containerd[1583]: time="2026-01-23T01:41:57.089403847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:41:57.160147 containerd[1583]: time="2026-01-23T01:41:57.159945123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:41:57.162648 containerd[1583]: time="2026-01-23T01:41:57.162441312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:41:57.162908 containerd[1583]: time="2026-01-23T01:41:57.162860744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:41:57.164915 kubelet[2830]: E0123 01:41:57.163103 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:41:57.164915 kubelet[2830]: E0123 01:41:57.163146 2830 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:41:57.164915 kubelet[2830]: E0123 01:41:57.163252 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:41:57.164915 kubelet[2830]: E0123 01:41:57.164761 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:57.365839 containerd[1583]: time="2026-01-23T01:41:57.365797381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msxcs,Uid:3557eedc-6578-421a-8c65-fff9d3233af5,Namespace:calico-system,Attempt:0,}" Jan 23 01:41:57.592459 kubelet[2830]: E0123 01:41:57.592309 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:41:57.624284 kubelet[2830]: E0123 01:41:57.624248 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:57.812929 systemd-networkd[1471]: cali48b5607b107: Link UP Jan 23 01:41:57.814355 systemd-networkd[1471]: cali48b5607b107: Gained carrier Jan 23 01:41:57.872251 systemd-networkd[1471]: cali92d46097b68: Gained IPv6LL Jan 23 01:41:57.896677 containerd[1583]: 2026-01-23 01:41:57.509 [INFO][4730] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--msxcs-eth0 goldmane-666569f655- calico-system 3557eedc-6578-421a-8c65-fff9d3233af5 878 0 2026-01-23 01:41:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-msxcs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali48b5607b107 [] [] }} ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-" Jan 23 01:41:57.896677 containerd[1583]: 2026-01-23 01:41:57.509 [INFO][4730] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 
01:41:57.896677 containerd[1583]: 2026-01-23 01:41:57.664 [INFO][4745] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" HandleID="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Workload="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.668 [INFO][4745] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" HandleID="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Workload="localhost-k8s-goldmane--666569f655--msxcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000476120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-msxcs", "timestamp":"2026-01-23 01:41:57.664167996 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.672 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.672 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.673 [INFO][4745] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.709 [INFO][4745] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" host="localhost" Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.726 [INFO][4745] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.742 [INFO][4745] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.749 [INFO][4745] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.759 [INFO][4745] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:57.897353 containerd[1583]: 2026-01-23 01:41:57.759 [INFO][4745] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" host="localhost" Jan 23 01:41:57.898071 containerd[1583]: 2026-01-23 01:41:57.765 [INFO][4745] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131 Jan 23 01:41:57.898071 containerd[1583]: 2026-01-23 01:41:57.784 [INFO][4745] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" host="localhost" Jan 23 01:41:57.898071 containerd[1583]: 2026-01-23 01:41:57.797 [INFO][4745] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" host="localhost" Jan 23 01:41:57.898071 containerd[1583]: 
2026-01-23 01:41:57.797 [INFO][4745] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" host="localhost" Jan 23 01:41:57.898071 containerd[1583]: 2026-01-23 01:41:57.797 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:41:57.898071 containerd[1583]: 2026-01-23 01:41:57.798 [INFO][4745] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" HandleID="k8s-pod-network.9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Workload="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 01:41:57.898426 containerd[1583]: 2026-01-23 01:41:57.803 [INFO][4730] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--msxcs-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3557eedc-6578-421a-8c65-fff9d3233af5", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-msxcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali48b5607b107", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:57.898426 containerd[1583]: 2026-01-23 01:41:57.803 [INFO][4730] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 01:41:57.898863 containerd[1583]: 2026-01-23 01:41:57.804 [INFO][4730] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48b5607b107 ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 01:41:57.898863 containerd[1583]: 2026-01-23 01:41:57.814 [INFO][4730] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 01:41:57.898928 
containerd[1583]: 2026-01-23 01:41:57.817 [INFO][4730] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--msxcs-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3557eedc-6578-421a-8c65-fff9d3233af5", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131", Pod:"goldmane-666569f655-msxcs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali48b5607b107", MAC:"8a:ef:dd:23:22:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:57.899121 containerd[1583]: 2026-01-23 01:41:57.887 [INFO][4730] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" Namespace="calico-system" Pod="goldmane-666569f655-msxcs" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--msxcs-eth0" Jan 23 01:41:57.936398 systemd-networkd[1471]: cali4619ba44ef7: Gained IPv6LL Jan 23 01:41:57.977301 containerd[1583]: time="2026-01-23T01:41:57.976389397Z" level=info msg="connecting to shim 9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131" address="unix:///run/containerd/s/e2d0af0930f5bcb0faa55b449d51c7345a6c60225c7b98ef8af44901782ab49c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:58.085073 systemd[1]: Started cri-containerd-9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131.scope - libcontainer container 9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131. 
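The IPAM lines above show Calico taking a host-affine /26 block (192.168.88.128/26) and handing the next free address, 192.168.88.132, to the goldmane pod. Purely as an illustration of that arithmetic (Calico's real allocator records assignments in the IPAM block resource, not in a local map), here is a minimal Go sketch that walks the same block and skips addresses assumed to have been assigned earlier on this node:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // 192.168.88.128/26 is the block the log shows affinity for; the
        // "used" entries are an assumption standing in for addresses already
        // handed out to earlier pods on this node.
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.128"): true, // network address, never assigned
            netip.MustParseAddr("192.168.88.129"): true,
            netip.MustParseAddr("192.168.88.130"): true,
            netip.MustParseAddr("192.168.88.131"): true,
        }
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                fmt.Println("next free address:", a) // 192.168.88.132, as in the log
                break
            }
        }
    }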
Jan 23 01:41:58.150920 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:41:58.224961 containerd[1583]: time="2026-01-23T01:41:58.224303123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msxcs,Uid:3557eedc-6578-421a-8c65-fff9d3233af5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ab4272757d21f702cb1ee5a554cda725b731c138f26dedb98840b6a9f51b131\"" Jan 23 01:41:58.227930 containerd[1583]: time="2026-01-23T01:41:58.227420361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:41:58.307996 containerd[1583]: time="2026-01-23T01:41:58.307818527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:41:58.311199 containerd[1583]: time="2026-01-23T01:41:58.310916167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:41:58.311199 containerd[1583]: time="2026-01-23T01:41:58.311013358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:41:58.311332 kubelet[2830]: E0123 01:41:58.311241 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:41:58.311332 kubelet[2830]: E0123 01:41:58.311302 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:41:58.313286 kubelet[2830]: E0123 01:41:58.313064 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgc28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:41:58.315797 kubelet[2830]: E0123 01:41:58.314457 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:41:58.372188 containerd[1583]: 
time="2026-01-23T01:41:58.371944180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99xj6,Uid:ca431a16-f60f-49a8-ad90-ef63fc269ffe,Namespace:kube-system,Attempt:0,}" Jan 23 01:41:58.576867 kubelet[2830]: E0123 01:41:58.576136 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:41:58.577833 kubelet[2830]: E0123 01:41:58.577752 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:41:58.773450 systemd-networkd[1471]: cali409cf830698: Link UP Jan 23 01:41:58.779942 systemd-networkd[1471]: cali409cf830698: Gained carrier Jan 23 01:41:58.817735 containerd[1583]: 2026-01-23 01:41:58.486 [INFO][4816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--99xj6-eth0 coredns-674b8bbfcf- kube-system ca431a16-f60f-49a8-ad90-ef63fc269ffe 881 0 2026-01-23 01:40:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-99xj6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali409cf830698 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-" Jan 23 01:41:58.817735 containerd[1583]: 2026-01-23 01:41:58.488 [INFO][4816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.817735 containerd[1583]: 2026-01-23 01:41:58.619 [INFO][4832] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" 
HandleID="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Workload="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.620 [INFO][4832] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" HandleID="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Workload="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-99xj6", "timestamp":"2026-01-23 01:41:58.619055771 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.620 [INFO][4832] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.620 [INFO][4832] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.620 [INFO][4832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.647 [INFO][4832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" host="localhost" Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.679 [INFO][4832] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.695 [INFO][4832] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.704 [INFO][4832] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.712 [INFO][4832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:41:58.818149 containerd[1583]: 2026-01-23 01:41:58.712 [INFO][4832] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" host="localhost" Jan 23 01:41:58.818942 containerd[1583]: 2026-01-23 01:41:58.728 [INFO][4832] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd Jan 23 01:41:58.818942 containerd[1583]: 2026-01-23 01:41:58.740 [INFO][4832] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" host="localhost" Jan 23 01:41:58.818942 containerd[1583]: 2026-01-23 01:41:58.758 [INFO][4832] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" host="localhost" Jan 23 01:41:58.818942 containerd[1583]: 2026-01-23 01:41:58.758 [INFO][4832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" host="localhost" Jan 23 
01:41:58.818942 containerd[1583]: 2026-01-23 01:41:58.758 [INFO][4832] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:41:58.818942 containerd[1583]: 2026-01-23 01:41:58.758 [INFO][4832] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" HandleID="k8s-pod-network.fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Workload="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.819108 containerd[1583]: 2026-01-23 01:41:58.765 [INFO][4816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--99xj6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ca431a16-f60f-49a8-ad90-ef63fc269ffe", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-99xj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali409cf830698", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:58.819329 containerd[1583]: 2026-01-23 01:41:58.765 [INFO][4816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.819329 containerd[1583]: 2026-01-23 01:41:58.765 [INFO][4816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali409cf830698 ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.819329 containerd[1583]: 2026-01-23 01:41:58.782 [INFO][4816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.819399 containerd[1583]: 2026-01-23 01:41:58.784 [INFO][4816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--99xj6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ca431a16-f60f-49a8-ad90-ef63fc269ffe", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd", Pod:"coredns-674b8bbfcf-99xj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali409cf830698", MAC:"72:fc:cf:a8:3c:08", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:41:58.819399 containerd[1583]: 2026-01-23 01:41:58.805 [INFO][4816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" Namespace="kube-system" Pod="coredns-674b8bbfcf-99xj6" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99xj6-eth0" Jan 23 01:41:58.898069 systemd-networkd[1471]: cali48b5607b107: Gained IPv6LL Jan 23 01:41:58.957757 containerd[1583]: time="2026-01-23T01:41:58.956776995Z" level=info msg="connecting to shim fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd" address="unix:///run/containerd/s/bdf94cbf3e89614b7957cc39a382f5a39f67124f68065eeffad93d0ec1b52779" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:41:59.053096 systemd[1]: Started cri-containerd-fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd.scope - libcontainer container fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd. 
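The WorkloadEndpoint dump above prints the CoreDNS container ports in hexadecimal (Port:0x35 and Port:0x23c1). Decoded, these are the conventional CoreDNS ports; a trivial Go check, included only for readability:

    package main

    import "fmt"

    func main() {
        // Port values exactly as printed in the endpoint dump above.
        ports := map[string]int{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
        for name, p := range ports {
            fmt.Printf("%s -> %d\n", name, p) // dns -> 53, dns-tcp -> 53, metrics -> 9153
        }
    }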
Jan 23 01:41:59.103237 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:41:59.211179 containerd[1583]: time="2026-01-23T01:41:59.210903618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99xj6,Uid:ca431a16-f60f-49a8-ad90-ef63fc269ffe,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd\"" Jan 23 01:41:59.233401 containerd[1583]: time="2026-01-23T01:41:59.233259485Z" level=info msg="CreateContainer within sandbox \"fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:41:59.279920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761597971.mount: Deactivated successfully. Jan 23 01:41:59.282769 containerd[1583]: time="2026-01-23T01:41:59.282348290Z" level=info msg="Container 039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:41:59.302782 containerd[1583]: time="2026-01-23T01:41:59.302419568Z" level=info msg="CreateContainer within sandbox \"fb271b9f26e52058d4d23449d6d47fad703db4e7234f60b6a909ec3b0f2c96dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d\"" Jan 23 01:41:59.305393 containerd[1583]: time="2026-01-23T01:41:59.304264016Z" level=info msg="StartContainer for \"039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d\"" Jan 23 01:41:59.305393 containerd[1583]: time="2026-01-23T01:41:59.305189674Z" level=info msg="connecting to shim 039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d" address="unix:///run/containerd/s/bdf94cbf3e89614b7957cc39a382f5a39f67124f68065eeffad93d0ec1b52779" protocol=ttrpc version=3 Jan 23 01:41:59.360096 systemd[1]: Started cri-containerd-039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d.scope - libcontainer container 039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d. 
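The pull failures reported above reach the kubelet as gRPC errors with code NotFound, which is why it cycles through ErrImagePull and then ImagePullBackOff: the requested tags do not exist under ghcr.io/flatcar/calico, so retrying the same reference cannot succeed. A minimal sketch of how such an error is distinguished in Go, assuming the google.golang.org/grpc status and codes packages; pullImage below is a hypothetical stand-in for the CRI image-service call, not kubelet code:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // pullImage is a hypothetical stand-in for the CRI PullImage call that
    // produced the errors in the log; it only fabricates the same status.
    func pullImage(ref string) error {
        return status.Errorf(codes.NotFound,
            "failed to pull and unpack image %q: %s: not found", ref, ref)
    }

    func main() {
        err := pullImage("ghcr.io/flatcar/calico/goldmane:v3.30.4")
        if status.Code(err) == codes.NotFound {
            // NotFound means the reference is missing in the registry, so
            // backing off and retrying the same tag cannot fix it.
            fmt.Println("image reference not found:", err)
        }
    }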
Jan 23 01:41:59.369106 containerd[1583]: time="2026-01-23T01:41:59.368923318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-snsbz,Uid:c5579b08-3693-4846-9ba6-5f0864556381,Namespace:kube-system,Attempt:0,}" Jan 23 01:41:59.370340 containerd[1583]: time="2026-01-23T01:41:59.370257419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-4q5l8,Uid:73bba584-49ff-4a6a-a59a-46cd1ea9004d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:41:59.531261 containerd[1583]: time="2026-01-23T01:41:59.527993156Z" level=info msg="StartContainer for \"039160b1d718c8a7caf3afc7c34fcc47eebebfb94ed426c387af16c16c42cd4d\" returns successfully" Jan 23 01:41:59.607747 kubelet[2830]: E0123 01:41:59.607235 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:41:59.637730 kubelet[2830]: I0123 01:41:59.635262 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-99xj6" podStartSLOduration=78.635243065 podStartE2EDuration="1m18.635243065s" podCreationTimestamp="2026-01-23 01:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:41:59.632420256 +0000 UTC m=+89.595427438" watchObservedRunningTime="2026-01-23 01:41:59.635243065 +0000 UTC m=+89.598250257" Jan 23 01:41:59.924816 systemd-networkd[1471]: cali56d9e01243a: Link UP Jan 23 01:41:59.928354 systemd-networkd[1471]: cali56d9e01243a: Gained carrier Jan 23 01:41:59.992821 systemd-networkd[1471]: cali409cf830698: Gained IPv6LL Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.529 [INFO][4925] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0 calico-apiserver-694fcf68f5- calico-apiserver 73bba584-49ff-4a6a-a59a-46cd1ea9004d 876 0 2026-01-23 01:41:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:694fcf68f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-694fcf68f5-4q5l8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56d9e01243a [] [] }} ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.529 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.718 [INFO][4956] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" HandleID="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Workload="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.719 [INFO][4956] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" HandleID="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Workload="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c76f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-694fcf68f5-4q5l8", "timestamp":"2026-01-23 01:41:59.718209328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.726 [INFO][4956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.728 [INFO][4956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.728 [INFO][4956] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.758 [INFO][4956] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.795 [INFO][4956] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.819 [INFO][4956] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.828 [INFO][4956] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.837 [INFO][4956] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.838 [INFO][4956] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.845 [INFO][4956] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.859 [INFO][4956] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.884 [INFO][4956] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.884 [INFO][4956] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" host="localhost" Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.886 [INFO][4956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:42:00.011837 containerd[1583]: 2026-01-23 01:41:59.886 [INFO][4956] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" HandleID="k8s-pod-network.e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Workload="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.014330 containerd[1583]: 2026-01-23 01:41:59.891 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0", GenerateName:"calico-apiserver-694fcf68f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"73bba584-49ff-4a6a-a59a-46cd1ea9004d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694fcf68f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-694fcf68f5-4q5l8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56d9e01243a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:42:00.014330 containerd[1583]: 2026-01-23 01:41:59.891 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.014330 containerd[1583]: 2026-01-23 01:41:59.891 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56d9e01243a ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.014330 containerd[1583]: 2026-01-23 01:41:59.924 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" 
Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.014330 containerd[1583]: 2026-01-23 01:41:59.933 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0", GenerateName:"calico-apiserver-694fcf68f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"73bba584-49ff-4a6a-a59a-46cd1ea9004d", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694fcf68f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b", Pod:"calico-apiserver-694fcf68f5-4q5l8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56d9e01243a", MAC:"6e:1a:56:8c:1c:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:42:00.014330 containerd[1583]: 2026-01-23 01:41:59.974 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-4q5l8" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--4q5l8-eth0" Jan 23 01:42:00.132846 systemd-networkd[1471]: cali6bac41b4184: Link UP Jan 23 01:42:00.140074 systemd-networkd[1471]: cali6bac41b4184: Gained carrier Jan 23 01:42:00.155479 containerd[1583]: time="2026-01-23T01:42:00.155223060Z" level=info msg="connecting to shim e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b" address="unix:///run/containerd/s/23b6c888ab2cd8c0df04a8747ca2de9199b36c62fc00ff7921ca87bb4db5bec3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.550 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--snsbz-eth0 coredns-674b8bbfcf- kube-system c5579b08-3693-4846-9ba6-5f0864556381 877 0 2026-01-23 01:40:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-snsbz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6bac41b4184 [{dns UDP 53 
0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.555 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.839 [INFO][4966] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" HandleID="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Workload="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.842 [INFO][4966] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" HandleID="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Workload="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d4ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-snsbz", "timestamp":"2026-01-23 01:41:59.839119365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.842 [INFO][4966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.885 [INFO][4966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.885 [INFO][4966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.914 [INFO][4966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.939 [INFO][4966] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:41:59.994 [INFO][4966] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.017 [INFO][4966] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.027 [INFO][4966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.027 [INFO][4966] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.041 [INFO][4966] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.074 [INFO][4966] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.102 [INFO][4966] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.105 [INFO][4966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" host="localhost" Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.106 [INFO][4966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
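Each CNI ADD above brackets its address assignment with "About to acquire host-wide IPAM lock", "Acquired", and "Released": assignments on a node are serialized so that two pods cannot claim the same address from the block. The following Go fragment is a simplified, purely illustrative mirror of that pattern, not Calico's implementation:

    package main

    import (
        "fmt"
        "sync"
    )

    // hostIPAM serializes address assignment the way the host-wide lock in
    // the log does; this is not Calico's data model, just the locking shape.
    type hostIPAM struct {
        mu   sync.Mutex
        next int // next free offset into 192.168.88.128/26
    }

    func (h *hostIPAM) assign() string {
        h.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer h.mu.Unlock() // "Released host-wide IPAM lock."
        addr := fmt.Sprintf("192.168.88.%d/26", 128+h.next)
        h.next++
        return addr
    }

    func main() {
        // Assume offsets 1 through 6 (.129 through .134) went to earlier pods.
        ipam := &hostIPAM{next: 7}
        fmt.Println(ipam.assign()) // 192.168.88.135/26, as claimed for coredns-snsbz above
    }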
Jan 23 01:42:00.216122 containerd[1583]: 2026-01-23 01:42:00.106 [INFO][4966] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" HandleID="k8s-pod-network.a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Workload="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.217297 containerd[1583]: 2026-01-23 01:42:00.114 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--snsbz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5579b08-3693-4846-9ba6-5f0864556381", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-snsbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6bac41b4184", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:42:00.217297 containerd[1583]: 2026-01-23 01:42:00.117 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.217297 containerd[1583]: 2026-01-23 01:42:00.117 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6bac41b4184 ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.217297 containerd[1583]: 2026-01-23 01:42:00.153 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.217297 
containerd[1583]: 2026-01-23 01:42:00.156 [INFO][4919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--snsbz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5579b08-3693-4846-9ba6-5f0864556381", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 40, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c", Pod:"coredns-674b8bbfcf-snsbz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6bac41b4184", MAC:"1a:87:f7:81:be:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:42:00.217297 containerd[1583]: 2026-01-23 01:42:00.208 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" Namespace="kube-system" Pod="coredns-674b8bbfcf-snsbz" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--snsbz-eth0" Jan 23 01:42:00.263045 systemd[1]: Started cri-containerd-e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b.scope - libcontainer container e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b. Jan 23 01:42:00.318441 containerd[1583]: time="2026-01-23T01:42:00.317975985Z" level=info msg="connecting to shim a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c" address="unix:///run/containerd/s/4fd22595b3c8331476e6b7b4cd37fd2afee3d7fe439656368ef23cec6f6ef35a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:42:00.354153 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:42:00.453003 systemd[1]: Started cri-containerd-a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c.scope - libcontainer container a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c. 
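Every Calico image pull above has failed with "not found", so one way to confirm the problem independently of the kubelet would be to request the same reference from containerd in the k8s.io namespace that the shims above run under. This is a hedged sketch using the containerd Go client; the socket path and namespace come from the log, everything else is an assumption:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path is containerd's default; the namespace matches the
        // "namespace=k8s.io" seen on the shim connections above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.4",
            containerd.WithPullUnpack)
        // Expected to fail exactly as in the log: the tag is not in the registry.
        fmt.Println("pull result:", err)
    }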
Jan 23 01:42:00.475428 containerd[1583]: time="2026-01-23T01:42:00.475103628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-4q5l8,Uid:73bba584-49ff-4a6a-a59a-46cd1ea9004d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e4b7709572e890149bf63213ad6910f6393ace45378347cd668f4ecebdc60e2b\"" Jan 23 01:42:00.486150 containerd[1583]: time="2026-01-23T01:42:00.486121021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:42:00.506405 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:42:00.557877 containerd[1583]: time="2026-01-23T01:42:00.557368706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:00.562689 containerd[1583]: time="2026-01-23T01:42:00.561896925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:42:00.562689 containerd[1583]: time="2026-01-23T01:42:00.561981815Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:00.562961 kubelet[2830]: E0123 01:42:00.562213 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:00.562961 kubelet[2830]: E0123 01:42:00.562265 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:00.562961 kubelet[2830]: E0123 01:42:00.562418 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dtjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:00.570782 kubelet[2830]: E0123 01:42:00.564838 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:00.596313 containerd[1583]: time="2026-01-23T01:42:00.596118397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-snsbz,Uid:c5579b08-3693-4846-9ba6-5f0864556381,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c\"" Jan 23 01:42:00.616276 kubelet[2830]: E0123 01:42:00.616224 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:00.619211 containerd[1583]: time="2026-01-23T01:42:00.615475957Z" level=info msg="CreateContainer within sandbox \"a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:42:00.661927 containerd[1583]: time="2026-01-23T01:42:00.661781508Z" level=info msg="Container 4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:42:00.691821 containerd[1583]: time="2026-01-23T01:42:00.691379308Z" level=info msg="CreateContainer within sandbox \"a4d49a2619169c54b3e81efbaae018ab58451227289f4002c2e547857f91338c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9\"" Jan 23 01:42:00.704335 containerd[1583]: time="2026-01-23T01:42:00.703953276Z" level=info msg="StartContainer for \"4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9\"" Jan 23 01:42:00.716022 containerd[1583]: time="2026-01-23T01:42:00.715981074Z" level=info msg="connecting to shim 4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9" address="unix:///run/containerd/s/4fd22595b3c8331476e6b7b4cd37fd2afee3d7fe439656368ef23cec6f6ef35a" protocol=ttrpc version=3 Jan 23 01:42:00.865988 systemd[1]: Started cri-containerd-4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9.scope - libcontainer container 4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9. Jan 23 01:42:00.996190 containerd[1583]: time="2026-01-23T01:42:00.996036175Z" level=info msg="StartContainer for \"4a43ca219acc100cbfd331114cf2feb249b9ac314336b481e39844f2ee2f96a9\" returns successfully" Jan 23 01:42:01.650460 kubelet[2830]: E0123 01:42:01.648335 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:01.648471 systemd-networkd[1471]: cali56d9e01243a: Gained IPv6LL Jan 23 01:42:01.735393 kubelet[2830]: I0123 01:42:01.734878 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-snsbz" podStartSLOduration=80.734855215 podStartE2EDuration="1m20.734855215s" podCreationTimestamp="2026-01-23 01:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:42:01.685803806 +0000 UTC m=+91.648811019" watchObservedRunningTime="2026-01-23 01:42:01.734855215 +0000 UTC m=+91.697862397" Jan 23 01:42:01.776325 systemd-networkd[1471]: cali6bac41b4184: Gained IPv6LL Jan 23 01:42:02.367726 containerd[1583]: time="2026-01-23T01:42:02.367156675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-p2bxz,Uid:45f8db5c-10e9-4970-b59b-9e6ccdff633a,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:42:02.872403 systemd-networkd[1471]: cali4133299c893: Link UP Jan 23 01:42:02.887961 systemd-networkd[1471]: cali4133299c893: Gained carrier Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.526 [INFO][5133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0 calico-apiserver-694fcf68f5- calico-apiserver 45f8db5c-10e9-4970-b59b-9e6ccdff633a 880 0 2026-01-23 01:41:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:694fcf68f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-694fcf68f5-p2bxz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4133299c893 [] [] }} 
ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.527 [INFO][5133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.721 [INFO][5147] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" HandleID="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Workload="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.721 [INFO][5147] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" HandleID="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Workload="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058b260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-694fcf68f5-p2bxz", "timestamp":"2026-01-23 01:42:02.721378572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.722 [INFO][5147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.722 [INFO][5147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.722 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.749 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.770 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.783 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.792 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.800 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.800 [INFO][5147] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.806 [INFO][5147] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4 Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.823 [INFO][5147] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.847 [INFO][5147] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.848 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" host="localhost" Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.848 [INFO][5147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
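The IPAM trace above claims 192.168.88.136 from the affine block 192.168.88.128/26 on host "localhost". As a quick sanity check (a standalone sketch, not taken from the log), standard CIDR arithmetic confirms that this /26 holds 64 addresses (.128 through .191) and that both pod IPs seen so far fall inside it:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block reported by ipam.go above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Addresses handed out to the CoreDNS and calico-apiserver pods in this log.
	for _, s := range []string{"192.168.88.135", "192.168.88.136"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}

	// A /26 spans 2^(32-26) = 64 addresses, i.e. .128 through .191.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}
```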
Jan 23 01:42:02.933884 containerd[1583]: 2026-01-23 01:42:02.848 [INFO][5147] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" HandleID="k8s-pod-network.01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Workload="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:02.936473 containerd[1583]: 2026-01-23 01:42:02.859 [INFO][5133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0", GenerateName:"calico-apiserver-694fcf68f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"45f8db5c-10e9-4970-b59b-9e6ccdff633a", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694fcf68f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-694fcf68f5-p2bxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4133299c893", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:42:02.936473 containerd[1583]: 2026-01-23 01:42:02.861 [INFO][5133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:02.936473 containerd[1583]: 2026-01-23 01:42:02.861 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4133299c893 ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:02.936473 containerd[1583]: 2026-01-23 01:42:02.889 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:02.936473 containerd[1583]: 2026-01-23 01:42:02.890 [INFO][5133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0", GenerateName:"calico-apiserver-694fcf68f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"45f8db5c-10e9-4970-b59b-9e6ccdff633a", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 41, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"694fcf68f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4", Pod:"calico-apiserver-694fcf68f5-p2bxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4133299c893", MAC:"7a:2a:b2:22:a7:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:42:02.936473 containerd[1583]: 2026-01-23 01:42:02.916 [INFO][5133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" Namespace="calico-apiserver" Pod="calico-apiserver-694fcf68f5-p2bxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--694fcf68f5--p2bxz-eth0" Jan 23 01:42:03.021956 containerd[1583]: time="2026-01-23T01:42:03.021804874Z" level=info msg="connecting to shim 01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4" address="unix:///run/containerd/s/7f7c5c5b5e46a0c09ef4f5555e63da6974ca53114de7d5d8d7bad25aaf2a8564" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:42:03.123911 systemd[1]: Started cri-containerd-01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4.scope - libcontainer container 01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4. 
Jan 23 01:42:03.171707 systemd-resolved[1398]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:42:03.252879 containerd[1583]: time="2026-01-23T01:42:03.252463044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-694fcf68f5-p2bxz,Uid:45f8db5c-10e9-4970-b59b-9e6ccdff633a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"01a762d0cd9a2285e255e4958fe70e81cfea5574301ae02f748af7d267f593a4\"" Jan 23 01:42:03.258786 containerd[1583]: time="2026-01-23T01:42:03.258439338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:42:03.373465 containerd[1583]: time="2026-01-23T01:42:03.373265172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:03.376795 containerd[1583]: time="2026-01-23T01:42:03.375895120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:42:03.376795 containerd[1583]: time="2026-01-23T01:42:03.375984027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:03.377444 kubelet[2830]: E0123 01:42:03.377288 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:03.377444 kubelet[2830]: E0123 01:42:03.377430 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:03.378171 kubelet[2830]: E0123 01:42:03.378090 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt7lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:03.381021 kubelet[2830]: E0123 01:42:03.380927 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:03.664780 kubelet[2830]: E0123 01:42:03.664272 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:04.666448 kubelet[2830]: E0123 01:42:04.666268 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:04.848250 systemd-networkd[1471]: cali4133299c893: Gained IPv6LL Jan 23 01:42:07.368950 containerd[1583]: time="2026-01-23T01:42:07.368901706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:42:07.436192 containerd[1583]: time="2026-01-23T01:42:07.436060206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:07.439248 containerd[1583]: time="2026-01-23T01:42:07.438942234Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:42:07.439248 containerd[1583]: time="2026-01-23T01:42:07.439016180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:42:07.440787 kubelet[2830]: E0123 01:42:07.440435 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:42:07.441744 kubelet[2830]: E0123 01:42:07.441230 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:42:07.442756 kubelet[2830]: E0123 01:42:07.442006 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:807a26d448fa42adb3b80c712c77b43e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:07.447782 containerd[1583]: time="2026-01-23T01:42:07.447366556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:42:07.510102 containerd[1583]: time="2026-01-23T01:42:07.509810666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
01:42:07.512879 containerd[1583]: time="2026-01-23T01:42:07.512749331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:42:07.512879 containerd[1583]: time="2026-01-23T01:42:07.512825733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:42:07.513086 kubelet[2830]: E0123 01:42:07.512950 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:42:07.513086 kubelet[2830]: E0123 01:42:07.512995 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:42:07.513175 kubelet[2830]: E0123 01:42:07.513096 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:07.515171 kubelet[2830]: E0123 01:42:07.514797 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:42:10.368752 containerd[1583]: time="2026-01-23T01:42:10.368400747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:42:10.439427 containerd[1583]: time="2026-01-23T01:42:10.439258618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:10.442033 containerd[1583]: time="2026-01-23T01:42:10.441768383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:42:10.442033 containerd[1583]: time="2026-01-23T01:42:10.441834767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:42:10.442121 kubelet[2830]: E0123 01:42:10.442085 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:42:10.442414 kubelet[2830]: E0123 01:42:10.442123 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:42:10.442414 kubelet[2830]: E0123 01:42:10.442282 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrfkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:10.444797 kubelet[2830]: E0123 01:42:10.444341 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:42:10.444989 containerd[1583]: time="2026-01-23T01:42:10.442830662Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:42:10.509475 containerd[1583]: time="2026-01-23T01:42:10.509155540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:10.512167 containerd[1583]: time="2026-01-23T01:42:10.511835560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:42:10.512323 containerd[1583]: time="2026-01-23T01:42:10.512284108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:10.513011 kubelet[2830]: E0123 01:42:10.512834 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:42:10.513011 kubelet[2830]: E0123 01:42:10.512880 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:42:10.513011 kubelet[2830]: E0123 01:42:10.512997 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgc28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:10.514740 kubelet[2830]: E0123 01:42:10.514176 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:42:11.369400 containerd[1583]: time="2026-01-23T01:42:11.369164377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:42:11.436219 containerd[1583]: time="2026-01-23T01:42:11.436012254Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:11.439115 containerd[1583]: time="2026-01-23T01:42:11.439059813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:42:11.439202 containerd[1583]: time="2026-01-23T01:42:11.439089783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:42:11.439903 kubelet[2830]: E0123 01:42:11.439856 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:42:11.439973 kubelet[2830]: E0123 01:42:11.439916 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:42:11.440349 kubelet[2830]: E0123 01:42:11.440062 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:11.444113 containerd[1583]: time="2026-01-23T01:42:11.444087361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:42:11.524938 containerd[1583]: time="2026-01-23T01:42:11.524407599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:11.527747 containerd[1583]: time="2026-01-23T01:42:11.527356528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:42:11.527747 containerd[1583]: time="2026-01-23T01:42:11.527721655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:42:11.528030 kubelet[2830]: E0123 01:42:11.527964 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:42:11.528030 kubelet[2830]: E0123 01:42:11.528011 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:42:11.528964 kubelet[2830]: E0123 01:42:11.528301 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:11.529963 kubelet[2830]: E0123 01:42:11.529917 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:42:15.368473 containerd[1583]: time="2026-01-23T01:42:15.367937436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:42:15.433169 containerd[1583]: time="2026-01-23T01:42:15.432922296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:15.435995 containerd[1583]: time="2026-01-23T01:42:15.435806158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:42:15.435995 containerd[1583]: time="2026-01-23T01:42:15.435899632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:15.436720 kubelet[2830]: E0123 01:42:15.436242 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:15.436720 kubelet[2830]: E0123 01:42:15.436378 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:15.437075 kubelet[2830]: E0123 01:42:15.436770 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dtjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:15.438974 kubelet[2830]: E0123 01:42:15.438332 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:16.833855 systemd[1]: Started sshd@9-10.0.0.137:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730). Jan 23 01:42:17.000827 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:17.004144 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:17.018994 systemd-logind[1556]: New session 10 of user core. Jan 23 01:42:17.033103 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:42:17.278218 sshd[5242]: Connection closed by 10.0.0.1 port 44730 Jan 23 01:42:17.281955 sshd-session[5239]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:17.293090 systemd[1]: sshd@9-10.0.0.137:22-10.0.0.1:44730.service: Deactivated successfully. Jan 23 01:42:17.296787 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:42:17.299829 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:42:17.303086 systemd-logind[1556]: Removed session 10. 
Jan 23 01:42:19.368641 containerd[1583]: time="2026-01-23T01:42:19.368360020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:42:19.468479 containerd[1583]: time="2026-01-23T01:42:19.468162202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:19.470362 containerd[1583]: time="2026-01-23T01:42:19.470105038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:42:19.470362 containerd[1583]: time="2026-01-23T01:42:19.470251812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:19.471765 kubelet[2830]: E0123 01:42:19.471346 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:19.471765 kubelet[2830]: E0123 01:42:19.471465 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:19.472301 kubelet[2830]: E0123 01:42:19.471870 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt7lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:19.474407 kubelet[2830]: E0123 01:42:19.474109 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:22.298061 systemd[1]: Started sshd@10-10.0.0.137:22-10.0.0.1:44744.service - OpenSSH per-connection server daemon (10.0.0.1:44744). 
Jan 23 01:42:22.371073 kubelet[2830]: E0123 01:42:22.370826 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:42:22.373700 kubelet[2830]: E0123 01:42:22.373443 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:42:22.460606 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 44744 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:22.463806 sshd-session[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:22.474188 systemd-logind[1556]: New session 11 of user core. Jan 23 01:42:22.488870 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:42:22.695763 sshd[5291]: Connection closed by 10.0.0.1 port 44744 Jan 23 01:42:22.696449 sshd-session[5287]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:22.704049 systemd[1]: sshd@10-10.0.0.137:22-10.0.0.1:44744.service: Deactivated successfully. Jan 23 01:42:22.707860 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:42:22.711719 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:42:22.715329 systemd-logind[1556]: Removed session 11. 
Jan 23 01:42:23.369042 kubelet[2830]: E0123 01:42:23.368943 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:42:26.371967 kubelet[2830]: E0123 01:42:26.371848 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:42:27.367281 kubelet[2830]: E0123 01:42:27.366887 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:27.712162 systemd[1]: Started sshd@11-10.0.0.137:22-10.0.0.1:57650.service - OpenSSH per-connection server daemon (10.0.0.1:57650). Jan 23 01:42:27.786889 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 57650 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:27.788945 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:27.798386 systemd-logind[1556]: New session 12 of user core. Jan 23 01:42:27.809222 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:42:28.001442 sshd[5314]: Connection closed by 10.0.0.1 port 57650 Jan 23 01:42:28.002851 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:28.011835 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:42:28.012100 systemd[1]: sshd@11-10.0.0.137:22-10.0.0.1:57650.service: Deactivated successfully. Jan 23 01:42:28.014986 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:42:28.020319 systemd-logind[1556]: Removed session 12. 
Jan 23 01:42:31.367294 kubelet[2830]: E0123 01:42:31.367155 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:33.020431 systemd[1]: Started sshd@12-10.0.0.137:22-10.0.0.1:56076.service - OpenSSH per-connection server daemon (10.0.0.1:56076). Jan 23 01:42:33.095894 sshd[5331]: Accepted publickey for core from 10.0.0.1 port 56076 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:33.098219 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:33.105894 systemd-logind[1556]: New session 13 of user core. Jan 23 01:42:33.116952 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:42:33.323736 sshd[5334]: Connection closed by 10.0.0.1 port 56076 Jan 23 01:42:33.323976 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:33.331690 systemd[1]: sshd@12-10.0.0.137:22-10.0.0.1:56076.service: Deactivated successfully. Jan 23 01:42:33.334395 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:42:33.336301 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:42:33.339682 systemd-logind[1556]: Removed session 13. Jan 23 01:42:34.368880 containerd[1583]: time="2026-01-23T01:42:34.367754400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:42:34.442326 containerd[1583]: time="2026-01-23T01:42:34.442120279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:34.444933 containerd[1583]: time="2026-01-23T01:42:34.444791532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:42:34.444933 containerd[1583]: time="2026-01-23T01:42:34.444838825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:42:34.445862 kubelet[2830]: E0123 01:42:34.445367 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:42:34.445862 kubelet[2830]: E0123 01:42:34.445808 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:42:34.446351 kubelet[2830]: E0123 01:42:34.445941 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:807a26d448fa42adb3b80c712c77b43e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:34.449703 containerd[1583]: time="2026-01-23T01:42:34.448936597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:42:35.560645 containerd[1583]: time="2026-01-23T01:42:35.560405511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:35.562985 containerd[1583]: time="2026-01-23T01:42:35.562856683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:42:35.563056 containerd[1583]: time="2026-01-23T01:42:35.562997069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:42:35.563235 kubelet[2830]: E0123 01:42:35.563178 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:42:35.563235 kubelet[2830]: E0123 01:42:35.563230 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:42:35.563725 containerd[1583]: time="2026-01-23T01:42:35.563653912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:42:35.564156 kubelet[2830]: E0123 01:42:35.564024 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:35.566083 kubelet[2830]: E0123 01:42:35.566019 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:42:35.627414 containerd[1583]: time="2026-01-23T01:42:35.627203372Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 23 01:42:35.629426 containerd[1583]: time="2026-01-23T01:42:35.629359746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:42:35.629732 containerd[1583]: time="2026-01-23T01:42:35.629667439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:42:35.630227 kubelet[2830]: E0123 01:42:35.630038 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:42:35.630227 kubelet[2830]: E0123 01:42:35.630169 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:42:35.630463 kubelet[2830]: E0123 01:42:35.630324 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrfkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:35.631993 kubelet[2830]: E0123 01:42:35.631847 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:42:36.373230 containerd[1583]: time="2026-01-23T01:42:36.372830078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:42:36.443743 containerd[1583]: time="2026-01-23T01:42:36.443329055Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:36.445509 containerd[1583]: time="2026-01-23T01:42:36.445316875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:42:36.445509 containerd[1583]: time="2026-01-23T01:42:36.445402075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:36.445837 kubelet[2830]: E0123 01:42:36.445779 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:42:36.445837 kubelet[2830]: E0123 01:42:36.445837 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:42:36.446145 kubelet[2830]: E0123 
01:42:36.445959 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgc28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:36.447671 kubelet[2830]: E0123 01:42:36.447442 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" 
podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:42:38.340860 systemd[1]: Started sshd@13-10.0.0.137:22-10.0.0.1:56090.service - OpenSSH per-connection server daemon (10.0.0.1:56090). Jan 23 01:42:38.423927 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 56090 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:38.426032 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:38.436382 systemd-logind[1556]: New session 14 of user core. Jan 23 01:42:38.444989 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:42:38.657227 sshd[5358]: Connection closed by 10.0.0.1 port 56090 Jan 23 01:42:38.658330 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:38.672293 systemd[1]: sshd@13-10.0.0.137:22-10.0.0.1:56090.service: Deactivated successfully. Jan 23 01:42:38.675833 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:42:38.678949 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:42:38.682704 systemd-logind[1556]: Removed session 14. Jan 23 01:42:39.368831 containerd[1583]: time="2026-01-23T01:42:39.367113661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:42:39.439410 containerd[1583]: time="2026-01-23T01:42:39.438818485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:39.441811 containerd[1583]: time="2026-01-23T01:42:39.441699526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:42:39.441811 containerd[1583]: time="2026-01-23T01:42:39.441780247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:42:39.442266 kubelet[2830]: E0123 01:42:39.442137 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:42:39.442266 kubelet[2830]: E0123 01:42:39.442199 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:42:39.442950 kubelet[2830]: E0123 01:42:39.442337 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:39.446635 containerd[1583]: time="2026-01-23T01:42:39.446378358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:42:39.508210 containerd[1583]: time="2026-01-23T01:42:39.507710315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:39.511108 containerd[1583]: time="2026-01-23T01:42:39.510705106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:42:39.511235 containerd[1583]: time="2026-01-23T01:42:39.510905320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:42:39.511724 kubelet[2830]: E0123 01:42:39.511674 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:42:39.511724 kubelet[2830]: E0123 01:42:39.511722 2830 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:42:39.511911 kubelet[2830]: E0123 01:42:39.511854 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:39.513728 kubelet[2830]: E0123 01:42:39.513350 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:42:42.368092 containerd[1583]: time="2026-01-23T01:42:42.367902498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:42:42.445417 containerd[1583]: time="2026-01-23T01:42:42.445273246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:42.448289 containerd[1583]: time="2026-01-23T01:42:42.447949105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:42:42.448289 containerd[1583]: time="2026-01-23T01:42:42.448089095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:42.449243 kubelet[2830]: E0123 01:42:42.448812 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:42.449243 kubelet[2830]: E0123 01:42:42.448871 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:42.449243 kubelet[2830]: E0123 01:42:42.449031 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dtjn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-4q5l8_calico-apiserver(73bba584-49ff-4a6a-a59a-46cd1ea9004d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:42.450260 kubelet[2830]: E0123 01:42:42.450204 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:43.676224 systemd[1]: Started sshd@14-10.0.0.137:22-10.0.0.1:60126.service - OpenSSH per-connection server daemon (10.0.0.1:60126). Jan 23 01:42:43.783922 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 60126 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:43.786896 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:43.797269 systemd-logind[1556]: New session 15 of user core. Jan 23 01:42:43.802916 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:42:43.982269 sshd[5377]: Connection closed by 10.0.0.1 port 60126 Jan 23 01:42:43.982991 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:43.994354 systemd[1]: sshd@14-10.0.0.137:22-10.0.0.1:60126.service: Deactivated successfully. Jan 23 01:42:43.997250 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:42:43.999331 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:42:44.005119 systemd[1]: Started sshd@15-10.0.0.137:22-10.0.0.1:60134.service - OpenSSH per-connection server daemon (10.0.0.1:60134). Jan 23 01:42:44.008204 systemd-logind[1556]: Removed session 15. Jan 23 01:42:44.081007 sshd[5392]: Accepted publickey for core from 10.0.0.1 port 60134 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:44.083167 sshd-session[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:44.094097 systemd-logind[1556]: New session 16 of user core. Jan 23 01:42:44.106857 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 01:42:44.380826 sshd[5395]: Connection closed by 10.0.0.1 port 60134 Jan 23 01:42:44.382217 sshd-session[5392]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:44.395788 systemd[1]: sshd@15-10.0.0.137:22-10.0.0.1:60134.service: Deactivated successfully. Jan 23 01:42:44.399197 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:42:44.407168 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:42:44.412762 systemd[1]: Started sshd@16-10.0.0.137:22-10.0.0.1:60148.service - OpenSSH per-connection server daemon (10.0.0.1:60148). Jan 23 01:42:44.418314 systemd-logind[1556]: Removed session 16. Jan 23 01:42:44.521672 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 60148 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:44.522265 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:44.538242 systemd-logind[1556]: New session 17 of user core. Jan 23 01:42:44.546859 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:42:44.749357 sshd[5409]: Connection closed by 10.0.0.1 port 60148 Jan 23 01:42:44.749864 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:44.756691 systemd[1]: sshd@16-10.0.0.137:22-10.0.0.1:60148.service: Deactivated successfully. Jan 23 01:42:44.759770 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:42:44.761280 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:42:44.764304 systemd-logind[1556]: Removed session 17. Jan 23 01:42:46.374236 containerd[1583]: time="2026-01-23T01:42:46.373971242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:42:46.441112 containerd[1583]: time="2026-01-23T01:42:46.440924065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:42:46.442773 containerd[1583]: time="2026-01-23T01:42:46.442661970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:42:46.442848 containerd[1583]: time="2026-01-23T01:42:46.442807141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:42:46.443523 kubelet[2830]: E0123 01:42:46.443059 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:46.443523 kubelet[2830]: E0123 01:42:46.443158 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:42:46.443523 kubelet[2830]: E0123 01:42:46.443280 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt7lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:42:46.444869 kubelet[2830]: E0123 01:42:46.444818 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:47.367142 kubelet[2830]: E0123 01:42:47.367022 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:42:47.371237 kubelet[2830]: E0123 01:42:47.370321 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:42:49.767129 systemd[1]: Started sshd@17-10.0.0.137:22-10.0.0.1:60154.service - OpenSSH per-connection server daemon (10.0.0.1:60154). Jan 23 01:42:49.865387 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 60154 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:49.867711 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:49.877123 systemd-logind[1556]: New session 18 of user core. Jan 23 01:42:49.894891 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:42:50.091907 sshd[5453]: Connection closed by 10.0.0.1 port 60154 Jan 23 01:42:50.092441 sshd-session[5450]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:50.103725 systemd[1]: sshd@17-10.0.0.137:22-10.0.0.1:60154.service: Deactivated successfully. Jan 23 01:42:50.108405 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:42:50.110648 systemd-logind[1556]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:42:50.115294 systemd-logind[1556]: Removed session 18. 
Jan 23 01:42:50.370134 kubelet[2830]: E0123 01:42:50.369915 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:42:51.368657 kubelet[2830]: E0123 01:42:51.367867 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:42:53.368175 kubelet[2830]: E0123 01:42:53.367173 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:42:55.112047 systemd[1]: Started sshd@18-10.0.0.137:22-10.0.0.1:59914.service - OpenSSH per-connection server daemon (10.0.0.1:59914). Jan 23 01:42:55.195373 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 59914 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:42:55.197299 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:42:55.206808 systemd-logind[1556]: New session 19 of user core. Jan 23 01:42:55.216827 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:42:55.383188 sshd[5471]: Connection closed by 10.0.0.1 port 59914 Jan 23 01:42:55.383677 sshd-session[5468]: pam_unix(sshd:session): session closed for user core Jan 23 01:42:55.390420 systemd[1]: sshd@18-10.0.0.137:22-10.0.0.1:59914.service: Deactivated successfully. Jan 23 01:42:55.395029 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:42:55.398953 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:42:55.402353 systemd-logind[1556]: Removed session 19. 
Jan 23 01:42:57.368262 kubelet[2830]: E0123 01:42:57.368183 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:42:58.367827 kubelet[2830]: E0123 01:42:58.367744 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:43:00.403276 systemd[1]: Started sshd@19-10.0.0.137:22-10.0.0.1:59920.service - OpenSSH per-connection server daemon (10.0.0.1:59920). Jan 23 01:43:00.482384 sshd[5489]: Accepted publickey for core from 10.0.0.1 port 59920 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:00.485255 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:00.495412 systemd-logind[1556]: New session 20 of user core. Jan 23 01:43:00.503913 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:43:00.680262 sshd[5492]: Connection closed by 10.0.0.1 port 59920 Jan 23 01:43:00.680296 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:00.687023 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:43:00.687848 systemd[1]: sshd@19-10.0.0.137:22-10.0.0.1:59920.service: Deactivated successfully. Jan 23 01:43:00.690409 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:43:00.694154 systemd-logind[1556]: Removed session 20. 
Jan 23 01:43:01.368334 kubelet[2830]: E0123 01:43:01.368072 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:43:02.367242 kubelet[2830]: E0123 01:43:02.367057 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:43:03.370426 kubelet[2830]: E0123 01:43:03.370251 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:43:05.696718 systemd[1]: Started sshd@20-10.0.0.137:22-10.0.0.1:56378.service - OpenSSH per-connection server daemon (10.0.0.1:56378). Jan 23 01:43:05.768402 sshd[5507]: Accepted publickey for core from 10.0.0.1 port 56378 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:05.770427 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:05.777345 systemd-logind[1556]: New session 21 of user core. Jan 23 01:43:05.788799 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:43:05.950774 sshd[5510]: Connection closed by 10.0.0.1 port 56378 Jan 23 01:43:05.951007 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:05.956267 systemd[1]: sshd@20-10.0.0.137:22-10.0.0.1:56378.service: Deactivated successfully. Jan 23 01:43:05.959327 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:43:05.961301 systemd-logind[1556]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:43:05.964257 systemd-logind[1556]: Removed session 21. 
Jan 23 01:43:08.370745 kubelet[2830]: E0123 01:43:08.370298 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:43:10.371380 kubelet[2830]: E0123 01:43:10.371332 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:43:10.970405 systemd[1]: Started sshd@21-10.0.0.137:22-10.0.0.1:56380.service - OpenSSH per-connection server daemon (10.0.0.1:56380). Jan 23 01:43:11.045949 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:11.047421 sshd-session[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:11.056735 systemd-logind[1556]: New session 22 of user core. Jan 23 01:43:11.069111 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:43:11.270436 sshd[5529]: Connection closed by 10.0.0.1 port 56380 Jan 23 01:43:11.271202 sshd-session[5526]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:11.282706 systemd[1]: sshd@21-10.0.0.137:22-10.0.0.1:56380.service: Deactivated successfully. Jan 23 01:43:11.285032 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:43:11.286251 systemd-logind[1556]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:43:11.290213 systemd[1]: Started sshd@22-10.0.0.137:22-10.0.0.1:56392.service - OpenSSH per-connection server daemon (10.0.0.1:56392). Jan 23 01:43:11.292743 systemd-logind[1556]: Removed session 22. Jan 23 01:43:11.364409 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 56392 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:11.366077 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:11.376964 systemd-logind[1556]: New session 23 of user core. Jan 23 01:43:11.387859 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 01:43:11.807999 sshd[5546]: Connection closed by 10.0.0.1 port 56392 Jan 23 01:43:11.809205 sshd-session[5542]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:11.821740 systemd[1]: Started sshd@23-10.0.0.137:22-10.0.0.1:56408.service - OpenSSH per-connection server daemon (10.0.0.1:56408). Jan 23 01:43:11.822275 systemd[1]: sshd@22-10.0.0.137:22-10.0.0.1:56392.service: Deactivated successfully. Jan 23 01:43:11.825363 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:43:11.828300 systemd-logind[1556]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:43:11.834523 systemd-logind[1556]: Removed session 23. Jan 23 01:43:11.958409 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 56408 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:11.960933 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:11.969778 systemd-logind[1556]: New session 24 of user core. Jan 23 01:43:11.981905 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:43:12.376714 kubelet[2830]: E0123 01:43:12.375987 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:43:12.833849 sshd[5561]: Connection closed by 10.0.0.1 port 56408 Jan 23 01:43:12.836158 sshd-session[5555]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:12.851044 systemd[1]: Started sshd@24-10.0.0.137:22-10.0.0.1:50424.service - OpenSSH per-connection server daemon (10.0.0.1:50424). Jan 23 01:43:12.851742 systemd[1]: sshd@23-10.0.0.137:22-10.0.0.1:56408.service: Deactivated successfully. Jan 23 01:43:12.857246 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:43:12.865245 systemd-logind[1556]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:43:12.874391 systemd-logind[1556]: Removed session 24. Jan 23 01:43:12.954724 sshd[5576]: Accepted publickey for core from 10.0.0.1 port 50424 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:12.956835 sshd-session[5576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:12.965395 systemd-logind[1556]: New session 25 of user core. Jan 23 01:43:12.973835 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 01:43:13.360742 sshd[5585]: Connection closed by 10.0.0.1 port 50424 Jan 23 01:43:13.361942 sshd-session[5576]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:13.373122 systemd[1]: sshd@24-10.0.0.137:22-10.0.0.1:50424.service: Deactivated successfully. Jan 23 01:43:13.376313 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:43:13.380230 systemd-logind[1556]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:43:13.390831 systemd[1]: Started sshd@25-10.0.0.137:22-10.0.0.1:50430.service - OpenSSH per-connection server daemon (10.0.0.1:50430). Jan 23 01:43:13.393918 systemd-logind[1556]: Removed session 25. 
Jan 23 01:43:13.496200 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 50430 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:13.498190 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:13.515337 systemd-logind[1556]: New session 26 of user core. Jan 23 01:43:13.518880 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 01:43:13.711251 sshd[5607]: Connection closed by 10.0.0.1 port 50430 Jan 23 01:43:13.711847 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:13.718007 systemd[1]: sshd@25-10.0.0.137:22-10.0.0.1:50430.service: Deactivated successfully. Jan 23 01:43:13.720804 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 01:43:13.722650 systemd-logind[1556]: Session 26 logged out. Waiting for processes to exit. Jan 23 01:43:13.725144 systemd-logind[1556]: Removed session 26. Jan 23 01:43:15.367985 kubelet[2830]: E0123 01:43:15.367757 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:43:16.368641 kubelet[2830]: E0123 01:43:16.368338 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a" Jan 23 01:43:17.368750 containerd[1583]: time="2026-01-23T01:43:17.368664933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:43:17.458924 containerd[1583]: time="2026-01-23T01:43:17.458400768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:17.461254 containerd[1583]: time="2026-01-23T01:43:17.461093706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:43:17.461254 containerd[1583]: time="2026-01-23T01:43:17.461208531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: 
active requests=0, bytes read=77" Jan 23 01:43:17.461967 kubelet[2830]: E0123 01:43:17.461767 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:43:17.461967 kubelet[2830]: E0123 01:43:17.461892 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:43:17.462399 kubelet[2830]: E0123 01:43:17.462243 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rgc28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msxcs_calico-system(3557eedc-6578-421a-8c65-fff9d3233af5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:17.464112 kubelet[2830]: E0123 01:43:17.463679 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:43:18.370854 kubelet[2830]: E0123 01:43:18.370428 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:43:18.727339 systemd[1]: Started sshd@26-10.0.0.137:22-10.0.0.1:50434.service - OpenSSH per-connection server daemon (10.0.0.1:50434). Jan 23 01:43:18.818245 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 50434 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:18.820762 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:18.831666 systemd-logind[1556]: New session 27 of user core. Jan 23 01:43:18.836868 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 01:43:19.059036 sshd[5628]: Connection closed by 10.0.0.1 port 50434 Jan 23 01:43:19.059753 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:19.069241 systemd[1]: sshd@26-10.0.0.137:22-10.0.0.1:50434.service: Deactivated successfully. Jan 23 01:43:19.069845 systemd-logind[1556]: Session 27 logged out. Waiting for processes to exit. Jan 23 01:43:19.073984 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 01:43:19.077272 systemd-logind[1556]: Removed session 27. 
Jan 23 01:43:21.368519 kubelet[2830]: E0123 01:43:21.368400 2830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:43:22.368002 kubelet[2830]: E0123 01:43:22.367340 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-4q5l8" podUID="73bba584-49ff-4a6a-a59a-46cd1ea9004d" Jan 23 01:43:24.074351 systemd[1]: Started sshd@27-10.0.0.137:22-10.0.0.1:34696.service - OpenSSH per-connection server daemon (10.0.0.1:34696). Jan 23 01:43:24.210072 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 34696 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:24.212846 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:24.223393 systemd-logind[1556]: New session 28 of user core. Jan 23 01:43:24.229955 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 01:43:24.369698 containerd[1583]: time="2026-01-23T01:43:24.369280768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:43:24.419231 sshd[5676]: Connection closed by 10.0.0.1 port 34696 Jan 23 01:43:24.419898 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:24.426674 systemd[1]: sshd@27-10.0.0.137:22-10.0.0.1:34696.service: Deactivated successfully. Jan 23 01:43:24.430166 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 01:43:24.432721 systemd-logind[1556]: Session 28 logged out. Waiting for processes to exit. Jan 23 01:43:24.435047 systemd-logind[1556]: Removed session 28. 
Jan 23 01:43:24.458348 containerd[1583]: time="2026-01-23T01:43:24.458059130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:24.460042 containerd[1583]: time="2026-01-23T01:43:24.459963648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:43:24.460241 containerd[1583]: time="2026-01-23T01:43:24.460040242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:43:24.460677 kubelet[2830]: E0123 01:43:24.460400 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:43:24.460966 kubelet[2830]: E0123 01:43:24.460698 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:43:24.460966 kubelet[2830]: E0123 01:43:24.460804 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:807a26d448fa42adb3b80c712c77b43e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:24.465619 containerd[1583]: 
time="2026-01-23T01:43:24.465406003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:43:24.588064 containerd[1583]: time="2026-01-23T01:43:24.587804560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:24.589740 containerd[1583]: time="2026-01-23T01:43:24.589323172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:43:24.589740 containerd[1583]: time="2026-01-23T01:43:24.589372860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:43:24.589878 kubelet[2830]: E0123 01:43:24.589833 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:43:24.590440 kubelet[2830]: E0123 01:43:24.590038 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:43:24.590440 kubelet[2830]: E0123 01:43:24.590269 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85j8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54f6844c7b-qww2p_calico-system(af904215-7ebf-4966-8c59-420d5c45351b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:24.592759 kubelet[2830]: E0123 01:43:24.592424 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54f6844c7b-qww2p" podUID="af904215-7ebf-4966-8c59-420d5c45351b" Jan 23 01:43:27.368099 containerd[1583]: time="2026-01-23T01:43:27.367762917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:43:27.433732 containerd[1583]: time="2026-01-23T01:43:27.433179546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:27.435119 containerd[1583]: time="2026-01-23T01:43:27.434974170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:43:27.435186 containerd[1583]: time="2026-01-23T01:43:27.435006597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:43:27.435531 kubelet[2830]: E0123 01:43:27.435390 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:43:27.435531 kubelet[2830]: E0123 01:43:27.435439 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:43:27.436168 kubelet[2830]: E0123 01:43:27.436035 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrfkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6c9c68dbf8-nsnd4_calico-system(117ed452-382a-4cae-a50f-439078d719fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:27.436412 containerd[1583]: time="2026-01-23T01:43:27.436251261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:43:27.437406 kubelet[2830]: E0123 01:43:27.437374 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6c9c68dbf8-nsnd4" podUID="117ed452-382a-4cae-a50f-439078d719fb" Jan 23 01:43:27.504802 containerd[1583]: time="2026-01-23T01:43:27.504691101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:27.506815 containerd[1583]: time="2026-01-23T01:43:27.506684194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:43:27.506815 containerd[1583]: time="2026-01-23T01:43:27.506805150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:43:27.507070 kubelet[2830]: E0123 01:43:27.507000 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:43:27.507070 kubelet[2830]: E0123 01:43:27.507056 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:43:27.507402 kubelet[2830]: 
E0123 01:43:27.507203 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt7lg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-694fcf68f5-p2bxz_calico-apiserver(45f8db5c-10e9-4970-b59b-9e6ccdff633a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:27.508921 kubelet[2830]: E0123 01:43:27.508803 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-694fcf68f5-p2bxz" podUID="45f8db5c-10e9-4970-b59b-9e6ccdff633a" Jan 23 01:43:29.442844 systemd[1]: Started sshd@28-10.0.0.137:22-10.0.0.1:34704.service - OpenSSH per-connection server daemon (10.0.0.1:34704). Jan 23 01:43:29.516174 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 34704 ssh2: RSA SHA256:Xzw5kDPeow2kUH9VvLfyROOc93JCEvWih75HYqla8Hg Jan 23 01:43:29.518069 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:43:29.525396 systemd-logind[1556]: New session 29 of user core. 
Jan 23 01:43:29.541783 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 01:43:29.721717 sshd[5713]: Connection closed by 10.0.0.1 port 34704 Jan 23 01:43:29.722190 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Jan 23 01:43:29.727669 systemd[1]: sshd@28-10.0.0.137:22-10.0.0.1:34704.service: Deactivated successfully. Jan 23 01:43:29.730154 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 01:43:29.733050 systemd-logind[1556]: Session 29 logged out. Waiting for processes to exit. Jan 23 01:43:29.735746 systemd-logind[1556]: Removed session 29. Jan 23 01:43:30.368919 kubelet[2830]: E0123 01:43:30.368367 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msxcs" podUID="3557eedc-6578-421a-8c65-fff9d3233af5" Jan 23 01:43:30.370023 containerd[1583]: time="2026-01-23T01:43:30.369164410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:43:30.454182 containerd[1583]: time="2026-01-23T01:43:30.453915605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:30.456119 containerd[1583]: time="2026-01-23T01:43:30.456034268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:43:30.456119 containerd[1583]: time="2026-01-23T01:43:30.456104891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:43:30.456401 kubelet[2830]: E0123 01:43:30.456267 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:43:30.456401 kubelet[2830]: E0123 01:43:30.456363 2830 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:43:30.456724 kubelet[2830]: E0123 01:43:30.456655 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:30.459764 containerd[1583]: time="2026-01-23T01:43:30.459204807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:43:30.519716 containerd[1583]: time="2026-01-23T01:43:30.519366467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:43:30.521945 containerd[1583]: time="2026-01-23T01:43:30.521732798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:43:30.521945 containerd[1583]: time="2026-01-23T01:43:30.521800350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:43:30.522660 kubelet[2830]: E0123 01:43:30.522358 2830 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:43:30.522660 kubelet[2830]: E0123 01:43:30.522513 2830 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:43:30.523074 kubelet[2830]: E0123 01:43:30.522768 2830 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb8wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7xq6t_calico-system(d4655da0-4d87-462c-8176-c9772e42f76a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:43:30.525117 kubelet[2830]: E0123 01:43:30.524878 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-7xq6t" podUID="d4655da0-4d87-462c-8176-c9772e42f76a"