Jan 23 00:56:08.996605 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 00:56:08.996638 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:56:08.996658 kernel: BIOS-provided physical RAM map:
Jan 23 00:56:08.996667 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 00:56:08.996675 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 00:56:08.996686 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 00:56:08.996697 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 00:56:08.996708 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 00:56:08.996718 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 00:56:08.996726 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 00:56:08.996737 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 00:56:08.996752 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 00:56:08.996763 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 00:56:08.996771 kernel: NX (Execute Disable) protection: active
Jan 23 00:56:08.996784 kernel: APIC: Static calls initialized
Jan 23 00:56:08.996795 kernel: SMBIOS 2.8 present.
Jan 23 00:56:08.996861 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 23 00:56:08.996874 kernel: DMI: Memory slots populated: 1/1
Jan 23 00:56:08.996885 kernel: Hypervisor detected: KVM
Jan 23 00:56:08.996896 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 00:56:08.996906 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 00:56:08.996917 kernel: kvm-clock: using sched offset of 31481845515 cycles
Jan 23 00:56:08.996927 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 00:56:08.996939 kernel: tsc: Detected 2445.424 MHz processor
Jan 23 00:56:08.996950 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 00:56:08.996962 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 00:56:08.996978 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 00:56:08.996989 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 00:56:08.997000 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 00:56:08.997012 kernel: Using GB pages for direct mapping
Jan 23 00:56:08.997021 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:56:08.997033 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 23 00:56:08.997043 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997055 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997065 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997082 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 23 00:56:08.997093 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997104 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997114 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997126 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:56:08.997143 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 23 00:56:08.997158 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 23 00:56:08.997169 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 23 00:56:08.997181 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 23 00:56:08.997193 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 23 00:56:08.997204 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 23 00:56:08.997215 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 23 00:56:08.997227 kernel: No NUMA configuration found
Jan 23 00:56:08.997239 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 23 00:56:08.997254 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 23 00:56:08.997264 kernel: Zone ranges:
Jan 23 00:56:08.997274 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 00:56:08.997285 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 23 00:56:08.997295 kernel: Normal empty
Jan 23 00:56:08.997355 kernel: Device empty
Jan 23 00:56:08.997365 kernel: Movable zone start for each node
Jan 23 00:56:08.997375 kernel: Early memory node ranges
Jan 23 00:56:08.997385 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 00:56:08.997399 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 23 00:56:08.997468 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 23 00:56:08.997480 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 00:56:08.997490 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 00:56:08.997527 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 23 00:56:08.997538 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 00:56:08.997548 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 00:56:08.997558 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 00:56:08.997568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 00:56:08.997583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 00:56:08.997593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 00:56:08.997603 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 00:56:08.997614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 00:56:08.997626 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 00:56:08.997637 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 00:56:08.997646 kernel: TSC deadline timer available
Jan 23 00:56:08.997657 kernel: CPU topo: Max. logical packages: 1
Jan 23 00:56:08.997668 kernel: CPU topo: Max. logical dies: 1
Jan 23 00:56:08.997684 kernel: CPU topo: Max. dies per package: 1
Jan 23 00:56:08.997696 kernel: CPU topo: Max. threads per core: 1
Jan 23 00:56:08.997706 kernel: CPU topo: Num. cores per package: 4
Jan 23 00:56:08.997717 kernel: CPU topo: Num. threads per package: 4
Jan 23 00:56:08.997729 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 00:56:08.997741 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 00:56:08.997751 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 00:56:08.997764 kernel: kvm-guest: setup PV sched yield
Jan 23 00:56:08.997775 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 00:56:08.997785 kernel: Booting paravirtualized kernel on KVM
Jan 23 00:56:08.997804 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 00:56:08.997814 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 00:56:08.997827 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 00:56:08.997838 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 00:56:08.997851 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 00:56:08.997860 kernel: kvm-guest: PV spinlocks enabled
Jan 23 00:56:08.997872 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 00:56:08.997885 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 00:56:08.997903 kernel: random: crng init done
Jan 23 00:56:08.997913 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:56:08.997925 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:56:08.997936 kernel: Fallback order for Node 0: 0
Jan 23 00:56:08.997949 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 23 00:56:08.997958 kernel: Policy zone: DMA32
Jan 23 00:56:08.997971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:56:08.997982 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 00:56:08.997995 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 00:56:08.998009 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 00:56:08.998022 kernel: Dynamic Preempt: voluntary
Jan 23 00:56:08.998033 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:56:08.998051 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:56:08.998063 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 00:56:08.998076 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:56:08.998131 kernel: Rude variant of Tasks RCU enabled.
Jan 23 00:56:08.998145 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:56:08.998155 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:56:08.998172 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 00:56:08.998184 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 00:56:08.998196 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 00:56:08.998207 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 00:56:08.998218 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 00:56:08.998230 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:56:08.998255 kernel: Console: colour VGA+ 80x25
Jan 23 00:56:08.998271 kernel: printk: legacy console [ttyS0] enabled
Jan 23 00:56:08.998283 kernel: ACPI: Core revision 20240827
Jan 23 00:56:08.998296 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 00:56:08.998364 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 00:56:08.998377 kernel: x2apic enabled
Jan 23 00:56:08.998395 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 00:56:08.998488 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 00:56:08.998503 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 00:56:08.998516 kernel: kvm-guest: setup PV IPIs
Jan 23 00:56:08.998528 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 00:56:08.998547 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 00:56:08.998559 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 23 00:56:08.998571 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 00:56:08.998583 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 00:56:08.998594 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 00:56:08.998607 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 00:56:08.998619 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 00:56:08.998631 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 00:56:08.998648 kernel: Speculative Store Bypass: Vulnerable
Jan 23 00:56:08.998661 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 00:56:08.998674 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 00:56:08.998685 kernel: active return thunk: srso_alias_return_thunk
Jan 23 00:56:08.998698 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 00:56:08.998710 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 00:56:08.998723 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 00:56:08.998734 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 00:56:08.998746 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 00:56:08.998764 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 00:56:08.998775 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 00:56:08.998788 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 00:56:08.998801 kernel: Freeing SMP alternatives memory: 32K
Jan 23 00:56:08.998811 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:56:08.998823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:56:08.998834 kernel: landlock: Up and running.
Jan 23 00:56:08.998844 kernel: SELinux: Initializing.
Jan 23 00:56:08.998855 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:56:08.998870 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:56:08.998948 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 00:56:08.998960 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 00:56:08.998971 kernel: signal: max sigframe size: 1776
Jan 23 00:56:08.998981 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:56:08.998992 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:56:08.999003 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:56:08.999013 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 00:56:08.999024 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:56:08.999038 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 00:56:08.999050 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 00:56:08.999062 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 00:56:08.999071 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 23 00:56:08.999082 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145096K reserved, 0K cma-reserved)
Jan 23 00:56:08.999092 kernel: devtmpfs: initialized
Jan 23 00:56:08.999103 kernel: x86/mm: Memory block size: 128MB
Jan 23 00:56:08.999113 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:56:08.999124 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 00:56:08.999138 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:56:08.999149 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:56:08.999159 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:56:08.999170 kernel: audit: type=2000 audit(1769129757.029:1): state=initialized audit_enabled=0 res=1
Jan 23 00:56:08.999180 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:56:08.999190 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 00:56:08.999201 kernel: cpuidle: using governor menu
Jan 23 00:56:08.999211 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:56:08.999222 kernel: dca service started, version 1.12.1
Jan 23 00:56:08.999235 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 00:56:08.999246 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 00:56:08.999256 kernel: PCI: Using configuration type 1 for base access
Jan 23 00:56:08.999267 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 00:56:08.999280 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:56:08.999291 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:56:08.999505 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:56:08.999519 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:56:08.999534 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:56:08.999544 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:56:08.999555 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:56:08.999565 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:56:08.999576 kernel: ACPI: Interpreter enabled
Jan 23 00:56:08.999586 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 00:56:08.999596 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 00:56:08.999607 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 00:56:08.999617 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 00:56:08.999628 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 00:56:08.999641 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 00:56:09.001008 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:56:09.001219 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 00:56:09.001669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 00:56:09.001689 kernel: PCI host bridge to bus 0000:00
Jan 23 00:56:09.002083 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 00:56:09.002350 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 00:56:09.002648 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 00:56:09.002897 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 23 00:56:09.003111 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 00:56:09.003363 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 23 00:56:09.003615 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 00:56:09.003965 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:56:09.004252 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 00:56:09.004557 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 00:56:09.004785 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 00:56:09.004966 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 00:56:09.005140 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 00:56:09.005591 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 00:56:09.005784 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 00:56:09.005966 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 00:56:09.006142 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 00:56:09.006390 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 00:56:09.006647 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 00:56:09.006825 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 00:56:09.006999 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 00:56:09.007257 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 00:56:09.007568 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 23 00:56:09.007746 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 23 00:56:09.007917 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 23 00:56:09.008089 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 00:56:09.008491 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 00:56:09.008678 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 00:56:09.009063 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 00:56:09.009243 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 23 00:56:09.009598 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 23 00:56:09.009960 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 00:56:09.010159 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 00:56:09.010177 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 00:56:09.010189 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 00:56:09.010207 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 00:56:09.010221 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 00:56:09.010234 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 00:56:09.010245 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 00:56:09.010255 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 00:56:09.010267 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 00:56:09.010279 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 00:56:09.010290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 00:56:09.010365 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 00:56:09.010383 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 00:56:09.010397 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 00:56:09.010586 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 00:56:09.010607 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 00:56:09.010619 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 00:56:09.010632 kernel: iommu: Default domain type: Translated
Jan 23 00:56:09.010643 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 00:56:09.010656 kernel: PCI: Using ACPI for IRQ routing
Jan 23 00:56:09.010711 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 00:56:09.010732 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 00:56:09.010743 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 23 00:56:09.011097 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 00:56:09.011391 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 00:56:09.011658 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 00:56:09.011675 kernel: vgaarb: loaded
Jan 23 00:56:09.011686 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 00:56:09.011697 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 00:56:09.011713 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 00:56:09.011724 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 00:56:09.011735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 00:56:09.011745 kernel: pnp: PnP ACPI init
Jan 23 00:56:09.012108 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 00:56:09.012126 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 00:56:09.012137 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 00:56:09.012148 kernel: NET: Registered PF_INET protocol family
Jan 23 00:56:09.012159 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 00:56:09.012175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 00:56:09.012187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 00:56:09.012197 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 00:56:09.012208 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 00:56:09.012219 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 00:56:09.012231 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:56:09.012244 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:56:09.012254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 00:56:09.012269 kernel: NET: Registered PF_XDP protocol family
Jan 23 00:56:09.012660 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 00:56:09.012859 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 00:56:09.013090 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 00:56:09.013270 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 23 00:56:09.013597 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 00:56:09.013773 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 23 00:56:09.013788 kernel: PCI: CLS 0 bytes, default 64
Jan 23 00:56:09.013800 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 00:56:09.013817 kernel: Initialise system trusted keyrings
Jan 23 00:56:09.013829 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 00:56:09.013840 kernel: Key type asymmetric registered
Jan 23 00:56:09.013852 kernel: Asymmetric key parser 'x509' registered
Jan 23 00:56:09.013863 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 00:56:09.013875 kernel: io scheduler mq-deadline registered
Jan 23 00:56:09.013886 kernel: io scheduler kyber registered
Jan 23 00:56:09.013897 kernel: io scheduler bfq registered
Jan 23 00:56:09.013909 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 00:56:09.013924 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 00:56:09.013936 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 00:56:09.013948 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 00:56:09.013959 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 00:56:09.013971 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 00:56:09.013982 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 00:56:09.013994 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 00:56:09.014005 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 00:56:09.014486 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 00:56:09.014692 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 00:56:09.014869 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T00:56:06 UTC (1769129766)
Jan 23 00:56:09.014884 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 23 00:56:09.015057 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 00:56:09.015071 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 00:56:09.015083 kernel: NET: Registered PF_INET6 protocol family
Jan 23 00:56:09.015094 kernel: Segment Routing with IPv6
Jan 23 00:56:09.015106 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 00:56:09.015122 kernel: NET: Registered PF_PACKET protocol family
Jan 23 00:56:09.015133 kernel: Key type dns_resolver registered
Jan 23 00:56:09.015144 kernel: IPI shorthand broadcast: enabled
Jan 23 00:56:09.015156 kernel: sched_clock: Marking stable (8377030178, 1532112623)->(10947835974, -1038693173)
Jan 23 00:56:09.015167 kernel: registered taskstats version 1
Jan 23 00:56:09.015179 kernel: Loading compiled-in X.509 certificates
Jan 23 00:56:09.015190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 00:56:09.015202 kernel: Demotion targets for Node 0: null
Jan 23 00:56:09.015213 kernel: Key type .fscrypt registered
Jan 23 00:56:09.015228 kernel: Key type fscrypt-provisioning registered
Jan 23 00:56:09.015239 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 00:56:09.015251 kernel: ima: Allocated hash algorithm: sha1
Jan 23 00:56:09.015262 kernel: ima: No architecture policies found
Jan 23 00:56:09.015273 kernel: clk: Disabling unused clocks
Jan 23 00:56:09.015285 kernel: Warning: unable to open an initial console.
Jan 23 00:56:09.015357 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 00:56:09.015373 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 00:56:09.015388 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 00:56:09.015400 kernel: Run /init as init process
Jan 23 00:56:09.015489 kernel: with arguments:
Jan 23 00:56:09.015502 kernel: /init
Jan 23 00:56:09.015513 kernel: with environment:
Jan 23 00:56:09.015524 kernel: HOME=/
Jan 23 00:56:09.015535 kernel: TERM=linux
Jan 23 00:56:09.015548 systemd[1]: Successfully made /usr/ read-only.
Jan 23 00:56:09.015564 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:56:09.015581 systemd[1]: Detected virtualization kvm.
Jan 23 00:56:09.015593 systemd[1]: Detected architecture x86-64.
Jan 23 00:56:09.015605 systemd[1]: Running in initrd.
Jan 23 00:56:09.015617 systemd[1]: No hostname configured, using default hostname.
Jan 23 00:56:09.015629 systemd[1]: Hostname set to .
Jan 23 00:56:09.015641 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:56:09.015653 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1559697941 wd_nsec: 1559697360
Jan 23 00:56:09.015669 systemd[1]: Queued start job for default target initrd.target.
Jan 23 00:56:09.015697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:56:09.015713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:56:09.015726 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 00:56:09.015739 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:56:09.015752 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 00:56:09.015769 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 00:56:09.015783 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 00:56:09.015796 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 00:56:09.015809 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:56:09.015821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:56:09.015834 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:56:09.015846 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:56:09.015862 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:56:09.015874 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:56:09.015887 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:56:09.015899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:56:09.015912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 00:56:09.015925 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 00:56:09.015937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:56:09.015950 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:56:09.015962 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:56:09.015977 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:56:09.015990 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 00:56:09.016003 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:56:09.016015 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 00:56:09.016028 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 00:56:09.016041 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 00:56:09.016054 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:56:09.016066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:56:09.016082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:56:09.016095 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 00:56:09.016145 systemd-journald[203]: Collecting audit messages is disabled. Jan 23 00:56:09.016176 systemd-journald[203]: Journal started Jan 23 00:56:09.016205 systemd-journald[203]: Runtime Journal (/run/log/journal/bc25ef8806c34377b39189f44be4a9f4) is 6M, max 48.3M, 42.2M free. 
Jan 23 00:56:09.019860 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:56:09.024649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:56:09.029662 systemd-modules-load[204]: Inserted module 'overlay' Jan 23 00:56:09.030648 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 00:56:09.062140 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:56:09.066623 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:56:09.111190 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 00:56:09.117703 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:56:09.130248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:56:09.151257 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:56:09.175571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 00:56:09.178705 kernel: Bridge firewalling registered Jan 23 00:56:09.178479 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 23 00:56:09.181109 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:56:09.480574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:56:09.490776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:56:09.517249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 00:56:09.525606 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 23 00:56:09.568392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:56:09.571251 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:56:09.607142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:56:09.621744 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 00:56:09.665248 systemd-resolved[236]: Positive Trust Anchors: Jan 23 00:56:09.665266 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:56:09.665372 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:56:09.680797 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6 Jan 23 00:56:09.672392 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 23 00:56:09.675964 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:56:09.691722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 23 00:56:09.910890 kernel: SCSI subsystem initialized Jan 23 00:56:09.925720 kernel: Loading iSCSI transport class v2.0-870. Jan 23 00:56:09.952660 kernel: iscsi: registered transport (tcp) Jan 23 00:56:09.986637 kernel: iscsi: registered transport (qla4xxx) Jan 23 00:56:09.986702 kernel: QLogic iSCSI HBA Driver Jan 23 00:56:10.034870 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:56:10.072271 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:56:10.075904 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:56:10.261497 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 00:56:10.278029 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 00:56:10.530862 kernel: raid6: avx2x4 gen() 17835 MB/s Jan 23 00:56:10.555952 kernel: raid6: avx2x2 gen() 19861 MB/s Jan 23 00:56:10.581366 kernel: raid6: avx2x1 gen() 12016 MB/s Jan 23 00:56:10.581805 kernel: raid6: using algorithm avx2x2 gen() 19861 MB/s Jan 23 00:56:10.618842 kernel: raid6: .... xor() 13253 MB/s, rmw enabled Jan 23 00:56:10.619094 kernel: raid6: using avx2x2 recovery algorithm Jan 23 00:56:10.675695 kernel: xor: automatically using best checksumming function avx Jan 23 00:56:11.094639 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 00:56:11.131879 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:56:11.140027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:56:11.279881 systemd-udevd[452]: Using default interface naming scheme 'v255'. Jan 23 00:56:11.292893 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:56:11.330129 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 23 00:56:11.383880 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jan 23 00:56:11.483206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:56:11.492370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:56:11.743493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:56:11.771959 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 00:56:11.941922 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 23 00:56:11.958929 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 23 00:56:11.993706 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 00:56:11.993826 kernel: GPT:9289727 != 19775487 Jan 23 00:56:11.993843 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 00:56:11.993866 kernel: GPT:9289727 != 19775487 Jan 23 00:56:11.993880 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 00:56:11.993893 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:56:12.021528 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 00:56:12.021676 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:56:12.022093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:56:12.041636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:56:12.052230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:56:12.065166 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:56:12.075609 kernel: libata version 3.00 loaded. 
Jan 23 00:56:12.134853 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 00:56:12.157531 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 00:56:12.164670 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 00:56:12.164777 kernel: AES CTR mode by8 optimization enabled Jan 23 00:56:12.248360 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 00:56:12.249792 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 00:56:12.250080 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 00:56:12.267669 kernel: scsi host0: ahci Jan 23 00:56:12.280487 kernel: scsi host1: ahci Jan 23 00:56:12.292490 kernel: scsi host2: ahci Jan 23 00:56:12.346108 kernel: scsi host3: ahci Jan 23 00:56:12.377617 kernel: scsi host4: ahci Jan 23 00:56:12.381682 kernel: scsi host5: ahci Jan 23 00:56:12.384993 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Jan 23 00:56:12.385017 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Jan 23 00:56:12.385034 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Jan 23 00:56:12.385049 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Jan 23 00:56:12.385066 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Jan 23 00:56:12.385080 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Jan 23 00:56:12.456138 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 23 00:56:12.675148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 00:56:12.697617 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 00:56:12.709811 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 23 00:56:12.727810 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 00:56:12.727864 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 00:56:12.727624 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 23 00:56:12.771540 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 00:56:12.771581 kernel: ata3.00: applying bridge limits Jan 23 00:56:12.771597 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 00:56:12.771612 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 00:56:12.771626 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 00:56:12.771641 kernel: ata3.00: LPM support broken, forcing max_power Jan 23 00:56:12.771657 kernel: ata3.00: configured for UDMA/100 Jan 23 00:56:12.777971 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 00:56:12.795124 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 23 00:56:12.796736 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 23 00:56:12.852882 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 23 00:56:12.873909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 00:56:12.943403 disk-uuid[618]: Primary Header is updated. Jan 23 00:56:12.943403 disk-uuid[618]: Secondary Entries is updated. Jan 23 00:56:12.943403 disk-uuid[618]: Secondary Header is updated. 
Jan 23 00:56:12.964592 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 00:56:12.966651 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 00:56:12.972500 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:56:12.990126 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 23 00:56:12.997709 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:56:13.600857 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:56:13.631833 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:56:13.632047 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:56:13.639825 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:56:13.663926 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:56:13.720129 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:56:14.007783 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 23 00:56:14.012453 disk-uuid[619]: The operation has completed successfully. Jan 23 00:56:14.099101 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:56:14.104023 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:56:14.217594 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:56:14.324237 sh[648]: Success Jan 23 00:56:14.428205 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 00:56:14.428403 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:56:14.428811 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:56:14.481606 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 00:56:14.632502 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jan 23 00:56:14.642616 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:56:14.745783 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 00:56:14.769288 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (660) Jan 23 00:56:14.776872 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 Jan 23 00:56:14.785652 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:56:14.864570 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:56:14.864955 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:56:14.875899 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:56:14.877054 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:56:14.888257 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 00:56:14.898002 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:56:14.944727 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 00:56:15.034811 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (689) Jan 23 00:56:15.044900 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:56:15.044970 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:56:15.070466 kernel: BTRFS info (device vda6): turning on async discard Jan 23 00:56:15.070665 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 00:56:15.089531 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:56:15.098940 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 23 00:56:15.137816 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 00:56:15.331674 kernel: hrtimer: interrupt took 3655726 ns Jan 23 00:56:16.369833 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:56:16.388747 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:56:16.534841 ignition[742]: Ignition 2.22.0 Jan 23 00:56:16.534918 ignition[742]: Stage: fetch-offline Jan 23 00:56:16.535066 ignition[742]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:56:16.535080 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 00:56:16.535888 ignition[742]: parsed url from cmdline: "" Jan 23 00:56:16.535896 ignition[742]: no config URL provided Jan 23 00:56:16.535927 ignition[742]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:56:16.535943 ignition[742]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:56:16.536093 ignition[742]: op(1): [started] loading QEMU firmware config module Jan 23 00:56:16.571703 systemd-networkd[834]: lo: Link UP Jan 23 00:56:16.536101 ignition[742]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 23 00:56:16.571709 systemd-networkd[834]: lo: Gained carrier Jan 23 00:56:16.600817 ignition[742]: op(1): [finished] loading QEMU firmware config module Jan 23 00:56:16.575076 systemd-networkd[834]: Enumeration completed Jan 23 00:56:16.575605 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:56:16.577148 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:56:16.577156 systemd-networkd[834]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 00:56:16.578197 systemd-networkd[834]: eth0: Link UP Jan 23 00:56:16.583897 systemd-networkd[834]: eth0: Gained carrier Jan 23 00:56:16.583913 systemd-networkd[834]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:56:16.589919 systemd[1]: Reached target network.target - Network. Jan 23 00:56:16.656599 systemd-networkd[834]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 23 00:56:16.965776 ignition[742]: parsing config with SHA512: 57b78d47c81e6fb9c41e6c36e692798eb35f75fb49efdfb25a3c71b85ea00984f503b7aaedc02099096b820ef25a7c09d5c325f508168d3a513ce5a4cc5bdfae Jan 23 00:56:17.276465 unknown[742]: fetched base config from "system" Jan 23 00:56:17.276505 unknown[742]: fetched user config from "qemu" Jan 23 00:56:17.279248 ignition[742]: fetch-offline: fetch-offline passed Jan 23 00:56:17.285509 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:56:17.279625 ignition[742]: Ignition finished successfully Jan 23 00:56:17.292526 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 23 00:56:17.320110 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 00:56:17.540095 ignition[842]: Ignition 2.22.0 Jan 23 00:56:17.540136 ignition[842]: Stage: kargs Jan 23 00:56:17.540302 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:56:17.540314 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 00:56:17.580731 ignition[842]: kargs: kargs passed Jan 23 00:56:17.580985 ignition[842]: Ignition finished successfully Jan 23 00:56:17.597934 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 00:56:17.644970 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 23 00:56:17.970513 systemd-networkd[834]: eth0: Gained IPv6LL Jan 23 00:56:18.281604 ignition[850]: Ignition 2.22.0 Jan 23 00:56:18.281660 ignition[850]: Stage: disks Jan 23 00:56:18.282011 ignition[850]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:56:18.282027 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 00:56:18.299042 ignition[850]: disks: disks passed Jan 23 00:56:18.299197 ignition[850]: Ignition finished successfully Jan 23 00:56:18.333694 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 00:56:18.341585 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 00:56:18.346976 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 00:56:18.347126 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:56:18.373389 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:56:18.431013 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:56:18.456522 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 00:56:18.888105 systemd-fsck[860]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 00:56:18.906040 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 00:56:18.938782 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 00:56:19.536569 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 00:56:19.538287 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 00:56:19.547880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 00:56:19.558933 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:56:19.565166 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 23 00:56:19.584840 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 00:56:19.584959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 00:56:19.585003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:56:19.640522 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Jan 23 00:56:19.640587 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:56:19.642130 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 00:56:19.657184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:56:19.660279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 00:56:19.669824 kernel: BTRFS info (device vda6): turning on async discard Jan 23 00:56:19.669855 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 00:56:19.676009 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:56:20.078793 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 00:56:20.130027 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Jan 23 00:56:20.143084 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 00:56:20.159960 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 00:56:20.586050 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 00:56:20.631762 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 00:56:20.638251 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 00:56:20.683158 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 23 00:56:20.708137 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:56:20.735986 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 00:56:21.066698 ignition[982]: INFO : Ignition 2.22.0 Jan 23 00:56:21.066698 ignition[982]: INFO : Stage: mount Jan 23 00:56:21.080994 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:56:21.080994 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 00:56:21.080994 ignition[982]: INFO : mount: mount passed Jan 23 00:56:21.080994 ignition[982]: INFO : Ignition finished successfully Jan 23 00:56:21.084733 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 00:56:21.142224 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 00:56:21.368701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:56:21.567141 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Jan 23 00:56:21.578574 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 00:56:21.578637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 00:56:21.637280 kernel: BTRFS info (device vda6): turning on async discard Jan 23 00:56:21.637511 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 00:56:21.641219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 00:56:22.140807 ignition[1012]: INFO : Ignition 2.22.0 Jan 23 00:56:22.140807 ignition[1012]: INFO : Stage: files Jan 23 00:56:22.140807 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:56:22.140807 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 23 00:56:22.164192 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Jan 23 00:56:22.164192 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 00:56:22.164192 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 00:56:22.222132 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 00:56:22.232372 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 00:56:22.241073 unknown[1012]: wrote ssh authorized keys file for user: core Jan 23 00:56:22.250232 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 00:56:22.255996 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 00:56:22.255996 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 00:56:22.357153 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 00:56:23.478703 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 00:56:23.487173 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 00:56:23.510937 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Jan 23 00:56:23.510937 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:56:23.547601 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:56:23.673160 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 00:56:23.989125 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 00:56:29.676271 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 00:56:29.676271 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 00:56:29.695049 ignition[1012]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 23 00:56:29.945651 ignition[1012]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 00:56:29.961833 ignition[1012]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 23 00:56:29.968016 ignition[1012]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Jan 23 00:56:29.968016 ignition[1012]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:56:29.979536 ignition[1012]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:56:29.979536 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:56:29.979536 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:56:29.979536 ignition[1012]: INFO : files: files passed Jan 23 00:56:29.979536 ignition[1012]: INFO : Ignition finished successfully Jan 23 00:56:29.986898 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:56:30.034950 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:56:30.051912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:56:30.082097 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:56:30.082749 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 00:56:30.097970 initrd-setup-root-after-ignition[1040]: grep: /sysroot/oem/oem-release: No such file or directory Jan 23 00:56:30.113281 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:56:30.113281 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:56:30.112058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:56:30.150771 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:56:30.114297 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 23 00:56:30.150795 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 00:56:30.560091 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 00:56:30.561937 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 00:56:30.579983 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:56:30.613273 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 00:56:30.632911 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 00:56:30.638763 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 00:56:30.718500 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:56:30.739636 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 00:56:30.772809 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:56:30.779195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:56:30.785078 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 00:56:30.812066 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 00:56:30.819527 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:56:30.834464 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 00:56:30.844636 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 00:56:30.849733 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 00:56:30.857926 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:56:30.861081 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 00:56:30.870949 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:56:30.901871 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 00:56:30.945972 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:56:30.955787 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 00:56:30.967581 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 00:56:31.005739 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 00:56:31.022126 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 00:56:31.045726 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:56:31.081999 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:56:31.116551 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:56:31.133566 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 00:56:31.135948 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:56:31.150233 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 00:56:31.151804 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:56:31.160799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 00:56:31.160992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:56:31.203039 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 00:56:31.217547 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 00:56:31.225795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:56:31.256697 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 00:56:31.267832 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 00:56:31.268647 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 00:56:31.268886 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:56:31.342657 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 00:56:31.359549 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:56:31.477899 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 00:56:31.485830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:56:31.525306 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 00:56:31.534975 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 00:56:31.640121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 00:56:31.682236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 00:56:31.730874 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 00:56:31.735766 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:56:31.764073 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 00:56:31.785232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:56:31.933862 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 00:56:31.950545 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 00:56:32.161145 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 00:56:32.254683 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 00:56:32.254924 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 00:56:32.276883 ignition[1067]: INFO : Ignition 2.22.0
Jan 23 00:56:32.276883 ignition[1067]: INFO : Stage: umount
Jan 23 00:56:32.286561 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:56:32.286561 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 00:56:32.286561 ignition[1067]: INFO : umount: umount passed
Jan 23 00:56:32.286561 ignition[1067]: INFO : Ignition finished successfully
Jan 23 00:56:32.285172 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 00:56:32.285503 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 00:56:32.291286 systemd[1]: Stopped target network.target - Network.
Jan 23 00:56:32.357813 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 00:56:32.362212 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 00:56:32.371577 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 00:56:32.371687 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 00:56:32.381829 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 00:56:32.381956 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 00:56:32.393065 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 00:56:32.393170 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 00:56:32.436277 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 00:56:32.436482 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 00:56:32.436996 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 00:56:32.455705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 00:56:32.488072 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 00:56:32.488321 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 00:56:32.531140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 00:56:32.531680 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 00:56:32.532566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 00:56:32.548670 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 00:56:32.550784 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 00:56:32.554701 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 00:56:32.554772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:56:32.572575 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 00:56:32.579097 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 00:56:32.579192 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:56:32.588746 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:56:32.588866 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:56:32.624680 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 00:56:32.624853 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:56:32.643030 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 00:56:32.643159 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:56:32.662658 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:56:32.681984 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:56:32.682215 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:56:32.728321 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 00:56:32.729313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:56:32.756232 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 00:56:32.756467 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:56:32.757579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 00:56:32.760095 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:56:32.790525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 00:56:32.791909 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:56:32.838036 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 00:56:32.838689 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:56:32.869599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 00:56:32.869704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:56:32.939085 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 00:56:32.954653 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 00:56:32.954917 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:56:32.988583 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 00:56:32.988690 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:56:33.045913 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 00:56:33.046173 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:56:33.079648 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 00:56:33.079990 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:56:33.089076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:56:33.090842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:56:33.133526 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 00:56:33.133797 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 00:56:33.133893 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 00:56:33.133973 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:56:33.135176 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 00:56:33.137288 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 00:56:33.143283 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 00:56:33.143592 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 00:56:33.146167 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 00:56:33.148295 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 00:56:33.256167 systemd[1]: Switching root.
Jan 23 00:56:33.341313 systemd-journald[203]: Journal stopped
Jan 23 00:56:37.845305 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 23 00:56:37.845511 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 00:56:37.845545 kernel: SELinux: policy capability open_perms=1
Jan 23 00:56:37.845561 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 00:56:37.845628 kernel: SELinux: policy capability always_check_network=0
Jan 23 00:56:37.845655 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 00:56:37.845671 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 00:56:37.845685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 00:56:37.845703 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 00:56:37.845719 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 00:56:37.845734 kernel: audit: type=1403 audit(1769129793.937:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 00:56:37.845751 systemd[1]: Successfully loaded SELinux policy in 163.407ms.
Jan 23 00:56:37.845791 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 21.270ms.
Jan 23 00:56:37.845809 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:56:37.845830 systemd[1]: Detected virtualization kvm.
Jan 23 00:56:37.845848 systemd[1]: Detected architecture x86-64.
Jan 23 00:56:37.845863 systemd[1]: Detected first boot.
Jan 23 00:56:37.845880 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:56:37.845910 zram_generator::config[1112]: No configuration found.
Jan 23 00:56:37.845928 kernel: Guest personality initialized and is inactive
Jan 23 00:56:37.845943 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 00:56:37.845966 kernel: Initialized host personality
Jan 23 00:56:37.845981 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 00:56:37.845997 systemd[1]: Populated /etc with preset unit settings.
Jan 23 00:56:37.846016 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 00:56:37.846081 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 00:56:37.846100 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 00:56:37.846118 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:56:37.846135 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 00:56:37.846153 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 00:56:37.846175 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 00:56:37.846192 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 00:56:37.846209 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 00:56:37.846227 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 00:56:37.846251 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 00:56:37.846275 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 00:56:37.846292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:56:37.846309 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:56:37.846329 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 00:56:37.846346 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 00:56:37.846363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 00:56:37.846597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:56:37.846619 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 00:56:37.846638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:56:37.846656 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:56:37.846673 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 00:56:37.846697 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 00:56:37.846715 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:56:37.846733 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 00:56:37.846751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:56:37.846771 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:56:37.846788 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:56:37.846803 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:56:37.846819 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 00:56:37.846841 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 00:56:37.846863 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 00:56:37.846879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:56:37.846897 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:56:37.846960 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:56:37.846978 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 00:56:37.846996 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 00:56:37.847014 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 00:56:37.847032 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 00:56:37.847051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:56:37.847073 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 00:56:37.847090 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 00:56:37.847108 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 00:56:37.847126 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 00:56:37.847144 systemd[1]: Reached target machines.target - Containers.
Jan 23 00:56:37.847162 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 00:56:37.847179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:56:37.847197 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:56:37.847218 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 00:56:37.847235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:56:37.847253 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:56:37.847271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:56:37.847289 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:56:37.847306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:56:37.847325 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 00:56:37.847461 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 00:56:37.847486 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 00:56:37.847508 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 00:56:37.847526 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 00:56:37.847545 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:56:37.847563 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:56:37.847580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:56:37.847597 kernel: loop: module loaded
Jan 23 00:56:37.847615 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:56:37.847632 kernel: fuse: init (API version 7.41)
Jan 23 00:56:37.847649 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 00:56:37.847704 systemd-journald[1195]: Collecting audit messages is disabled.
Jan 23 00:56:37.847734 systemd-journald[1195]: Journal started
Jan 23 00:56:37.847766 systemd-journald[1195]: Runtime Journal (/run/log/journal/bc25ef8806c34377b39189f44be4a9f4) is 6M, max 48.3M, 42.2M free.
Jan 23 00:56:36.644026 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 00:56:36.681337 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 00:56:36.683595 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 00:56:36.686473 systemd[1]: systemd-journald.service: Consumed 1.674s CPU time.
Jan 23 00:56:37.860596 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 00:56:37.871486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:56:37.890732 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 00:56:37.891028 systemd[1]: Stopped verity-setup.service.
Jan 23 00:56:37.918789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:56:37.930613 kernel: ACPI: bus type drm_connector registered
Jan 23 00:56:37.937936 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:56:37.944302 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 00:56:37.951865 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 00:56:37.962953 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 00:56:37.972232 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 00:56:37.979822 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 00:56:37.988850 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 00:56:37.994313 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 00:56:38.002766 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:56:38.026852 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 00:56:38.027324 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 00:56:38.035809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:56:38.037470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:56:38.052784 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:56:38.053224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:56:38.062550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:56:38.062983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:56:38.069956 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:56:38.079124 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:56:38.095761 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:56:38.096262 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:56:38.155187 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:56:38.167754 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:56:38.173842 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 00:56:38.180115 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 00:56:38.243736 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:56:38.260879 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 00:56:38.272221 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 00:56:38.277354 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 00:56:38.277525 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:56:38.285036 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 00:56:38.321306 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 00:56:38.330878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:56:38.333322 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 00:56:38.342066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 00:56:38.348690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:56:38.352970 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 00:56:38.356352 systemd-journald[1195]: Time spent on flushing to /var/log/journal/bc25ef8806c34377b39189f44be4a9f4 is 47.468ms for 976 entries.
Jan 23 00:56:38.356352 systemd-journald[1195]: System Journal (/var/log/journal/bc25ef8806c34377b39189f44be4a9f4) is 8M, max 195.6M, 187.6M free.
Jan 23 00:56:38.447071 systemd-journald[1195]: Received client request to flush runtime journal.
Jan 23 00:56:38.364075 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:56:38.368697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:56:38.378739 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 00:56:38.394679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:56:38.421240 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:56:38.428522 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 00:56:38.436209 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 00:56:38.445629 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 00:56:38.456842 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 00:56:38.477816 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 00:56:38.495798 kernel: loop0: detected capacity change from 0 to 110984
Jan 23 00:56:38.489852 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 00:56:39.131514 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 00:56:39.138022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:56:39.165803 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Jan 23 00:56:39.165862 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Jan 23 00:56:39.177477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:56:39.188500 kernel: loop1: detected capacity change from 0 to 128560
Jan 23 00:56:39.200075 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 00:56:39.262225 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 00:56:39.266791 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 00:56:39.870940 kernel: loop2: detected capacity change from 0 to 219144
Jan 23 00:56:39.974263 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 00:56:39.990238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:56:40.044286 kernel: loop3: detected capacity change from 0 to 110984
Jan 23 00:56:40.212151 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jan 23 00:56:40.212261 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jan 23 00:56:40.249491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:56:40.310340 kernel: loop4: detected capacity change from 0 to 128560
Jan 23 00:56:40.373576 kernel: loop5: detected capacity change from 0 to 219144
Jan 23 00:56:40.665635 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 23 00:56:40.670970 (sd-merge)[1257]: Merged extensions into '/usr'.
Jan 23 00:56:40.690962 systemd[1]: Reload requested from client PID 1231 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 00:56:40.691192 systemd[1]: Reloading...
Jan 23 00:56:41.127350 zram_generator::config[1285]: No configuration found.
Jan 23 00:56:42.929198 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:56:43.027099 systemd[1]: Reloading finished in 2334 ms.
Jan 23 00:56:43.072240 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:56:43.080350 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 00:56:43.086607 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 00:56:43.126979 systemd[1]: Starting ensure-sysext.service...
Jan 23 00:56:43.135130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:56:43.149320 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:56:43.192363 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
Jan 23 00:56:43.192506 systemd[1]: Reloading...
Jan 23 00:56:43.223617 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 00:56:43.225205 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 00:56:43.225951 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 00:56:43.226378 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 00:56:43.228052 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 00:56:43.228887 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Jan 23 00:56:43.229070 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Jan 23 00:56:43.236858 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:56:43.236902 systemd-tmpfiles[1326]: Skipping /boot
Jan 23 00:56:43.250959 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Jan 23 00:56:43.269022 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:56:43.269077 systemd-tmpfiles[1326]: Skipping /boot
Jan 23 00:56:43.293562 zram_generator::config[1353]: No configuration found.
Jan 23 00:56:43.571569 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 00:56:43.599585 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 00:56:43.600093 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 00:56:43.660492 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Jan 23 00:56:43.678179 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 00:56:43.678736 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 00:56:43.686480 kernel: ACPI: button: Power Button [PWRF]
Jan 23 00:56:43.686716 systemd[1]: Reloading finished in 493 ms.
Jan 23 00:56:43.700135 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:56:43.724914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:56:43.765383 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:56:43.842026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:56:43.846229 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:56:43.853636 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:56:43.858184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:56:43.991545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:56:44.004770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:56:44.011795 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:56:44.020724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:56:44.026195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:56:44.028034 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 00:56:44.033978 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:56:44.038349 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:56:44.056787 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:56:44.071194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:56:44.086234 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 00:56:44.096654 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:56:44.107281 kernel: kvm_amd: TSC scaling supported
Jan 23 00:56:44.107471 kernel: kvm_amd: Nested Virtualization enabled
Jan 23 00:56:44.107521 kernel: kvm_amd: Nested Paging enabled
Jan 23 00:56:44.109997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:56:44.116515 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 23 00:56:44.116573 kernel: kvm_amd: PMU virtualization is disabled
Jan 23 00:56:44.121978 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 00:56:44.179312 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:56:44.188742 augenrules[1477]: No rules
Jan 23 00:56:44.195933 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:56:44.196979 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:56:44.203927 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:56:44.204381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:56:44.209384 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:56:44.209974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:56:44.216610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:56:44.217104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:56:44.224861 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:56:44.225102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:56:44.229833 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 00:56:44.236311 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:56:44.260860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:56:44.261160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:56:44.266218 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:56:44.274508 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 00:56:44.274768 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:56:44.287312 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:56:44.310925 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:56:44.386623 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 00:56:44.397122 kernel: EDAC MC: Ver: 3.0.0
Jan 23 00:56:44.530927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:56:44.575560 systemd-networkd[1459]: lo: Link UP
Jan 23 00:56:44.575575 systemd-networkd[1459]: lo: Gained carrier
Jan 23 00:56:44.578233 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 00:56:44.579751 systemd-networkd[1459]: Enumeration completed
Jan 23 00:56:44.581362 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:56:44.584719 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:56:44.585147 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:56:44.586123 systemd-networkd[1459]: eth0: Link UP
Jan 23 00:56:44.586567 systemd-networkd[1459]: eth0: Gained carrier
Jan 23 00:56:44.586588 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:56:44.591780 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 00:56:44.598196 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 00:56:44.604811 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 00:56:44.608103 systemd-resolved[1463]: Positive Trust Anchors:
Jan 23 00:56:44.608149 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:56:44.608191 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:56:44.621618 systemd-networkd[1459]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 00:56:44.624798 systemd-resolved[1463]: Defaulting to hostname 'linux'.
Jan 23 00:56:44.627064 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection.
Jan 23 00:56:45.299623 systemd-timesyncd[1464]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 23 00:56:45.299690 systemd-timesyncd[1464]: Initial clock synchronization to Fri 2026-01-23 00:56:45.299475 UTC.
Jan 23 00:56:45.299840 systemd-resolved[1463]: Clock change detected. Flushing caches.
Jan 23 00:56:45.300432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:56:45.309232 systemd[1]: Reached target network.target - Network.
Jan 23 00:56:45.313220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:56:45.317432 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:56:45.322057 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 00:56:45.327783 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 00:56:45.337160 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 00:56:45.342514 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 00:56:45.348862 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 00:56:45.360550 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 00:56:45.368089 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 00:56:45.368353 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:56:45.374221 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:56:45.382798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 00:56:45.392185 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 00:56:45.402370 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 00:56:45.410031 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 00:56:45.419781 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 00:56:45.430689 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 00:56:45.439381 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 00:56:45.447730 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 00:56:45.453662 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 00:56:45.460309 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:56:45.464720 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:56:45.469521 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:56:45.471262 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 00:56:45.474104 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 00:56:45.481244 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 00:56:45.496799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 00:56:45.508334 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 00:56:45.523216 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 00:56:45.528106 jq[1518]: false
Jan 23 00:56:45.531720 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 00:56:45.534565 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 00:56:45.556102 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 00:56:45.567090 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 00:56:45.570378 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache
Jan 23 00:56:45.570391 oslogin_cache_refresh[1520]: Refreshing passwd entry cache
Jan 23 00:56:45.577327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 00:56:45.585077 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 00:56:45.586281 extend-filesystems[1519]: Found /dev/vda6
Jan 23 00:56:45.596076 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting
Jan 23 00:56:45.596076 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 00:56:45.596076 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache
Jan 23 00:56:45.595371 oslogin_cache_refresh[1520]: Failure getting users, quitting
Jan 23 00:56:45.595430 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 00:56:45.595496 oslogin_cache_refresh[1520]: Refreshing group entry cache
Jan 23 00:56:45.598792 extend-filesystems[1519]: Found /dev/vda9
Jan 23 00:56:45.605133 extend-filesystems[1519]: Checking size of /dev/vda9
Jan 23 00:56:45.612616 oslogin_cache_refresh[1520]: Failure getting groups, quitting
Jan 23 00:56:45.612850 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting
Jan 23 00:56:45.612850 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 00:56:45.612632 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 00:56:45.614596 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 00:56:45.622228 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 00:56:45.626525 extend-filesystems[1519]: Resized partition /dev/vda9
Jan 23 00:56:45.631239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 00:56:45.633374 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 00:56:45.640581 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 00:56:45.646591 extend-filesystems[1540]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 00:56:45.666169 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 23 00:56:45.657236 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 00:56:45.666436 jq[1542]: true
Jan 23 00:56:45.667033 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 00:56:45.667417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 00:56:45.668127 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jan 23 00:56:45.668435 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jan 23 00:56:45.673481 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 00:56:45.673830 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 00:56:45.685450 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 00:56:45.686415 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 00:56:45.720225 update_engine[1541]: I20260123 00:56:45.718300  1541 main.cc:92] Flatcar Update Engine starting
Jan 23 00:56:45.732774 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 00:56:45.762082 jq[1548]: true
Jan 23 00:56:45.765012 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 23 00:56:45.769251 tar[1546]: linux-amd64/LICENSE
Jan 23 00:56:45.808171 tar[1546]: linux-amd64/helm
Jan 23 00:56:45.808540 systemd-logind[1531]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 23 00:56:45.808622 systemd-logind[1531]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 23 00:56:45.813792 systemd-logind[1531]: New seat seat0.
Jan 23 00:56:45.816408 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 23 00:56:45.816408 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 23 00:56:45.816408 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 23 00:56:45.821795 dbus-daemon[1516]: [system] SELinux support is enabled
Jan 23 00:56:45.849247 update_engine[1541]: I20260123 00:56:45.847003  1541 update_check_scheduler.cc:74] Next update check in 2m30s
Jan 23 00:56:45.849292 bash[1578]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 00:56:45.850814 extend-filesystems[1519]: Resized filesystem in /dev/vda9
Jan 23 00:56:45.817317 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 23 00:56:45.832262 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 00:56:45.836774 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 00:56:45.838145 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 00:56:45.857082 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 23 00:56:45.865848 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 00:56:45.866543 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 00:56:45.871811 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 23 00:56:45.872232 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 00:56:45.872425 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 00:56:45.877726 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 00:56:45.877880 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 00:56:45.886803 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 00:56:45.910900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 23 00:56:45.937763 sshd_keygen[1547]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 00:56:45.975357 locksmithd[1582]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 00:56:46.002635 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 00:56:46.015693 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 00:56:46.027286 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:41936.service - OpenSSH per-connection server daemon (10.0.0.1:41936).
Jan 23 00:56:46.049520 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 00:56:46.050117 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 00:56:46.060253 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 00:56:46.094663 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:56:46.108407 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:56:46.118486 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 00:56:46.124683 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:56:46.160754 containerd[1550]: time="2026-01-23T00:56:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:56:46.161659 containerd[1550]: time="2026-01-23T00:56:46.161627307Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:56:46.168824 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 41936 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 00:56:46.170810 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:56:46.177809 containerd[1550]: time="2026-01-23T00:56:46.177638863Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.011µs" Jan 23 00:56:46.177809 containerd[1550]: time="2026-01-23T00:56:46.177680421Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:56:46.177809 containerd[1550]: time="2026-01-23T00:56:46.177704105Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:56:46.178180 containerd[1550]: time="2026-01-23T00:56:46.178096127Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:56:46.178180 containerd[1550]: time="2026-01-23T00:56:46.178162551Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 
00:56:46.178274 containerd[1550]: time="2026-01-23T00:56:46.178205291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:56:46.178873 containerd[1550]: time="2026-01-23T00:56:46.178349300Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:56:46.178873 containerd[1550]: time="2026-01-23T00:56:46.178372834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:56:46.178873 containerd[1550]: time="2026-01-23T00:56:46.178787297Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:56:46.178873 containerd[1550]: time="2026-01-23T00:56:46.178806674Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:56:46.178873 containerd[1550]: time="2026-01-23T00:56:46.178828044Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:56:46.178873 containerd[1550]: time="2026-01-23T00:56:46.178840137Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:56:46.179121 containerd[1550]: time="2026-01-23T00:56:46.179100413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:56:46.179523 containerd[1550]: time="2026-01-23T00:56:46.179449614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:56:46.179569 containerd[1550]: time="2026-01-23T00:56:46.179543650Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:56:46.179569 containerd[1550]: time="2026-01-23T00:56:46.179563838Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:56:46.179705 containerd[1550]: time="2026-01-23T00:56:46.179648135Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:56:46.180265 containerd[1550]: time="2026-01-23T00:56:46.180178135Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:56:46.180311 containerd[1550]: time="2026-01-23T00:56:46.180277721Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189668376Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189766600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189783672Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189806755Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189820841Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189829978Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 
containerd[1550]: time="2026-01-23T00:56:46.189847812Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189865945Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189881385Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189890732Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189899518Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:56:46.189998 containerd[1550]: time="2026-01-23T00:56:46.189911441Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:56:46.190871 containerd[1550]: time="2026-01-23T00:56:46.190700764Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:56:46.190871 containerd[1550]: time="2026-01-23T00:56:46.190728847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:56:46.190871 containerd[1550]: time="2026-01-23T00:56:46.190743824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:56:46.190871 containerd[1550]: time="2026-01-23T00:56:46.190754635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:56:46.190871 containerd[1550]: time="2026-01-23T00:56:46.190764543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:56:46.191344 containerd[1550]: 
time="2026-01-23T00:56:46.191273283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:56:46.191390 containerd[1550]: time="2026-01-23T00:56:46.191365295Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:56:46.191390 containerd[1550]: time="2026-01-23T00:56:46.191385724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:56:46.191460 containerd[1550]: time="2026-01-23T00:56:46.191402465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:56:46.191460 containerd[1550]: time="2026-01-23T00:56:46.191415810Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:56:46.191460 containerd[1550]: time="2026-01-23T00:56:46.191427822Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:56:46.191533 containerd[1550]: time="2026-01-23T00:56:46.191483336Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:56:46.191533 containerd[1550]: time="2026-01-23T00:56:46.191508112Z" level=info msg="Start snapshots syncer" Jan 23 00:56:46.191583 containerd[1550]: time="2026-01-23T00:56:46.191542006Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:56:46.192337 containerd[1550]: time="2026-01-23T00:56:46.191897750Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:56:46.192337 containerd[1550]: time="2026-01-23T00:56:46.192228137Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:56:46.192614 containerd[1550]: time="2026-01-23T00:56:46.192335226Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 00:56:46.192614 containerd[1550]: time="2026-01-23T00:56:46.192577188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 00:56:46.192667 containerd[1550]: time="2026-01-23T00:56:46.192616492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 00:56:46.192667 containerd[1550]: time="2026-01-23T00:56:46.192631770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 00:56:46.192667 containerd[1550]: time="2026-01-23T00:56:46.192645276Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 00:56:46.192667 containerd[1550]: time="2026-01-23T00:56:46.192659762Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 00:56:46.192770 containerd[1550]: time="2026-01-23T00:56:46.192673388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 00:56:46.192770 containerd[1550]: time="2026-01-23T00:56:46.192686893Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 00:56:46.192770 containerd[1550]: time="2026-01-23T00:56:46.192714344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 00:56:46.192770 containerd[1550]: time="2026-01-23T00:56:46.192730585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 00:56:46.192770 containerd[1550]: time="2026-01-23T00:56:46.192745703Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.192841031Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193098602Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193117267Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193133026Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193145459Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193159185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193183500Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193206614Z" level=info msg="runtime interface created"
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193216693Z" level=info msg="created NRI interface"
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193236630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193255435Z" level=info msg="Connect containerd service"
Jan 23 00:56:46.193336 containerd[1550]: time="2026-01-23T00:56:46.193284028Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 00:56:46.194294
systemd-logind[1531]: New session 1 of user core.
Jan 23 00:56:46.197650 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 23 00:56:46.199757 containerd[1550]: time="2026-01-23T00:56:46.199484980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 00:56:46.203408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 23 00:56:46.260106 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 23 00:56:46.274862 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 23 00:56:46.295586 tar[1546]: linux-amd64/README.md
Jan 23 00:56:46.303128 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 23 00:56:46.307883 systemd-logind[1531]: New session c1 of user core.
Jan 23 00:56:46.362403 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 00:56:46.404666 systemd-networkd[1459]: eth0: Gained IPv6LL
Jan 23 00:56:46.423829 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 00:56:46.431819 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 00:56:46.437298 containerd[1550]: time="2026-01-23T00:56:46.437226514Z" level=info msg="Start subscribing containerd event"
Jan 23 00:56:46.437624 containerd[1550]: time="2026-01-23T00:56:46.437381824Z" level=info msg="Start recovering state"
Jan 23 00:56:46.438108 containerd[1550]: time="2026-01-23T00:56:46.437663232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 00:56:46.438258 containerd[1550]: time="2026-01-23T00:56:46.438167734Z" level=info msg=serving...
address=/run/containerd/containerd.sock
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440115982Z" level=info msg="Start event monitor"
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440145016Z" level=info msg="Start cni network conf syncer for default"
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440162539Z" level=info msg="Start streaming server"
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440185231Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440197665Z" level=info msg="runtime interface starting up..."
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440210779Z" level=info msg="starting plugins..."
Jan 23 00:56:46.441994 containerd[1550]: time="2026-01-23T00:56:46.440235275Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 00:56:46.440190 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 23 00:56:46.448867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:56:46.454078 containerd[1550]: time="2026-01-23T00:56:46.454047329Z" level=info msg="containerd successfully booted in 0.294020s"
Jan 23 00:56:46.460384 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 00:56:46.465704 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 00:56:46.503707 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 23 00:56:46.504356 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 23 00:56:46.511047 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 00:56:46.517614 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 23 00:56:46.577241 systemd[1626]: Queued start job for default target default.target.
Jan 23 00:56:46.588128 systemd[1626]: Created slice app.slice - User Application Slice.
Jan 23 00:56:46.588166 systemd[1626]: Reached target paths.target - Paths.
Jan 23 00:56:46.588264 systemd[1626]: Reached target timers.target - Timers.
Jan 23 00:56:46.590761 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 23 00:56:46.641137 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 00:56:46.641282 systemd[1626]: Reached target sockets.target - Sockets.
Jan 23 00:56:46.641329 systemd[1626]: Reached target basic.target - Basic System.
Jan 23 00:56:46.641381 systemd[1626]: Reached target default.target - Main User Target.
Jan 23 00:56:46.641421 systemd[1626]: Startup finished in 315ms.
Jan 23 00:56:46.641586 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 00:56:46.656512 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 00:56:46.742745 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:41940.service - OpenSSH per-connection server daemon (10.0.0.1:41940).
Jan 23 00:56:46.845360 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 41940 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:56:46.847768 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:56:46.859071 systemd-logind[1531]: New session 2 of user core.
Jan 23 00:56:46.867780 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 00:56:46.956362 sshd[1666]: Connection closed by 10.0.0.1 port 41940
Jan 23 00:56:46.960195 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Jan 23 00:56:46.978389 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:41950.service - OpenSSH per-connection server daemon (10.0.0.1:41950).
Jan 23 00:56:46.983809 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:41940.service: Deactivated successfully.
Jan 23 00:56:46.986691 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 00:56:46.988146 systemd-logind[1531]: Session 2 logged out. Waiting for processes to exit.
Jan 23 00:56:46.991873 systemd-logind[1531]: Removed session 2.
Jan 23 00:56:48.239562 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 41950 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:56:48.260438 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:56:48.442158 systemd-logind[1531]: New session 3 of user core.
Jan 23 00:56:48.499362 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 00:56:49.300548 sshd[1675]: Connection closed by 10.0.0.1 port 41950
Jan 23 00:56:49.312292 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Jan 23 00:56:49.402571 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:41950.service: Deactivated successfully.
Jan 23 00:56:49.403362 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:41950.service: Consumed 1.085s CPU time, 3.7M memory peak.
Jan 23 00:56:49.408368 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 00:56:49.411411 systemd-logind[1531]: Session 3 logged out. Waiting for processes to exit.
Jan 23 00:56:49.414469 systemd-logind[1531]: Removed session 3.
Jan 23 00:56:52.614209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:56:52.615191 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 00:56:52.615408 systemd[1]: Startup finished in 8.606s (kernel) + 26.931s (initrd) + 18.168s (userspace) = 53.707s.
Jan 23 00:56:52.644753 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:56:55.926360 kubelet[1689]: E0123 00:56:55.925210 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:56:55.934360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:56:55.934743 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:56:55.935920 systemd[1]: kubelet.service: Consumed 6.976s CPU time, 258.4M memory peak.
Jan 23 00:56:59.296412 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:49506.service - OpenSSH per-connection server daemon (10.0.0.1:49506).
Jan 23 00:56:59.419503 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 49506 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:56:59.421653 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:56:59.432738 systemd-logind[1531]: New session 4 of user core.
Jan 23 00:56:59.442290 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 00:56:59.510406 sshd[1701]: Connection closed by 10.0.0.1 port 49506
Jan 23 00:56:59.510677 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Jan 23 00:56:59.527461 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:49506.service: Deactivated successfully.
Jan 23 00:56:59.530451 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 00:56:59.534580 systemd-logind[1531]: Session 4 logged out. Waiting for processes to exit.
Jan 23 00:56:59.537135 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:49518.service - OpenSSH per-connection server daemon (10.0.0.1:49518).
Jan 23 00:56:59.540600 systemd-logind[1531]: Removed session 4.
Jan 23 00:56:59.626052 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 49518 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:56:59.627856 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:56:59.641355 systemd-logind[1531]: New session 5 of user core.
Jan 23 00:56:59.650292 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 00:56:59.754368 sshd[1710]: Connection closed by 10.0.0.1 port 49518
Jan 23 00:56:59.751331 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Jan 23 00:56:59.797469 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:49518.service: Deactivated successfully.
Jan 23 00:56:59.800483 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 00:56:59.802113 systemd-logind[1531]: Session 5 logged out. Waiting for processes to exit.
Jan 23 00:56:59.861356 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:49520.service - OpenSSH per-connection server daemon (10.0.0.1:49520).
Jan 23 00:56:59.864250 systemd-logind[1531]: Removed session 5.
Jan 23 00:57:00.036807 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 49520 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:57:00.040805 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:57:00.083383 systemd-logind[1531]: New session 6 of user core.
Jan 23 00:57:00.094585 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 00:57:00.192484 sshd[1719]: Connection closed by 10.0.0.1 port 49520
Jan 23 00:57:00.193177 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Jan 23 00:57:00.211280 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:49520.service: Deactivated successfully.
Jan 23 00:57:00.215698 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 00:57:00.221092 systemd-logind[1531]: Session 6 logged out. Waiting for processes to exit.
Jan 23 00:57:00.228188 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:49532.service - OpenSSH per-connection server daemon (10.0.0.1:49532).
Jan 23 00:57:00.231316 systemd-logind[1531]: Removed session 6.
Jan 23 00:57:00.318167 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:57:00.322360 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:57:00.344427 systemd-logind[1531]: New session 7 of user core.
Jan 23 00:57:00.369846 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 00:57:00.508353 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 00:57:00.509568 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:57:00.577104 sudo[1729]: pam_unix(sudo:session): session closed for user root
Jan 23 00:57:00.588486 sshd[1728]: Connection closed by 10.0.0.1 port 49532
Jan 23 00:57:00.592425 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Jan 23 00:57:00.615813 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:49532.service: Deactivated successfully.
Jan 23 00:57:00.627745 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 00:57:00.632859 systemd-logind[1531]: Session 7 logged out. Waiting for processes to exit.
Jan 23 00:57:00.639730 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:49544.service - OpenSSH per-connection server daemon (10.0.0.1:49544).
Jan 23 00:57:00.643843 systemd-logind[1531]: Removed session 7.
Jan 23 00:57:00.902721 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 49544 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:57:00.928498 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:57:00.964296 systemd-logind[1531]: New session 8 of user core.
Jan 23 00:57:01.009358 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 00:57:01.131426 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 00:57:01.132259 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:57:01.153287 sudo[1740]: pam_unix(sudo:session): session closed for user root
Jan 23 00:57:01.175892 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 00:57:01.192620 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:57:01.237601 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:57:01.559761 augenrules[1762]: No rules
Jan 23 00:57:01.582826 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:57:01.584135 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:57:01.593789 sudo[1739]: pam_unix(sudo:session): session closed for user root
Jan 23 00:57:01.635711 sshd[1738]: Connection closed by 10.0.0.1 port 49544
Jan 23 00:57:01.637271 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Jan 23 00:57:01.709561 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:49544.service: Deactivated successfully.
Jan 23 00:57:01.739922 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 00:57:01.750416 systemd-logind[1531]: Session 8 logged out. Waiting for processes to exit.
Jan 23 00:57:01.755777 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:49616.service - OpenSSH per-connection server daemon (10.0.0.1:49616).
Jan 23 00:57:01.842423 systemd-logind[1531]: Removed session 8.
Jan 23 00:57:01.951310 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 49616 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 00:57:01.953828 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:57:01.963543 systemd-logind[1531]: New session 9 of user core.
Jan 23 00:57:01.976841 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 00:57:02.113623 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 00:57:02.116176 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:57:06.201305 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:57:06.256341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:57:09.814465 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 00:57:09.835830 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 00:57:09.846618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:57:09.864178 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:57:10.915366 kubelet[1805]: E0123 00:57:10.914788 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:57:10.926047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:57:10.926406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:57:10.927227 systemd[1]: kubelet.service: Consumed 2.137s CPU time, 111.5M memory peak.
Jan 23 00:57:12.465771 dockerd[1801]: time="2026-01-23T00:57:12.465394840Z" level=info msg="Starting up"
Jan 23 00:57:12.471518 dockerd[1801]: time="2026-01-23T00:57:12.470895842Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 00:57:12.513079 dockerd[1801]: time="2026-01-23T00:57:12.512839300Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 00:57:12.653862 dockerd[1801]: time="2026-01-23T00:57:12.653306780Z" level=info msg="Loading containers: start."
Jan 23 00:57:12.683243 kernel: Initializing XFRM netlink socket
Jan 23 00:57:16.411398 systemd-networkd[1459]: docker0: Link UP
Jan 23 00:57:16.421183 dockerd[1801]: time="2026-01-23T00:57:16.421042734Z" level=info msg="Loading containers: done."
Jan 23 00:57:18.446764 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4120601502-merged.mount: Deactivated successfully.
Jan 23 00:57:18.457884 dockerd[1801]: time="2026-01-23T00:57:18.457550006Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 00:57:18.458730 dockerd[1801]: time="2026-01-23T00:57:18.458649870Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 00:57:18.459649 dockerd[1801]: time="2026-01-23T00:57:18.459156185Z" level=info msg="Initializing buildkit"
Jan 23 00:57:18.892133 dockerd[1801]: time="2026-01-23T00:57:18.891261880Z" level=info msg="Completed buildkit initialization"
Jan 23 00:57:18.912469 dockerd[1801]: time="2026-01-23T00:57:18.912324305Z" level=info msg="Daemon has completed initialization"
Jan 23 00:57:18.912871 dockerd[1801]: time="2026-01-23T00:57:18.912670326Z" level=info msg="API listen on /run/docker.sock"
Jan 23 00:57:18.913471 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 00:57:21.187384 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 00:57:21.223494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:57:23.311308 containerd[1550]: time="2026-01-23T00:57:23.311015553Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 23 00:57:23.686517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:57:23.813625 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:57:24.341247 kubelet[2039]: E0123 00:57:24.340742 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:57:24.352755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:57:24.353188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:57:24.354416 systemd[1]: kubelet.service: Consumed 1.743s CPU time, 109.3M memory peak.
Jan 23 00:57:24.786389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739528827.mount: Deactivated successfully.
Jan 23 00:57:30.925781 update_engine[1541]: I20260123 00:57:30.913614 1541 update_attempter.cc:509] Updating boot flags...
Jan 23 00:57:34.597885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 23 00:57:34.604540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:57:36.023451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:57:36.039018 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:57:36.565105 containerd[1550]: time="2026-01-23T00:57:36.562346475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:36.565105 containerd[1550]: time="2026-01-23T00:57:36.564912263Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068073"
Jan 23 00:57:36.585852 containerd[1550]: time="2026-01-23T00:57:36.584401885Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:36.596218 containerd[1550]: time="2026-01-23T00:57:36.596109791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:36.602984 containerd[1550]: time="2026-01-23T00:57:36.601922237Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 13.288641439s"
Jan 23 00:57:36.602984 containerd[1550]: time="2026-01-23T00:57:36.602068359Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 23 00:57:36.618422 containerd[1550]: time="2026-01-23T00:57:36.617395028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23
00:57:36.701205 kubelet[2129]: E0123 00:57:36.699758 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:57:36.707426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:57:36.707821 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:57:36.708706 systemd[1]: kubelet.service: Consumed 1.685s CPU time, 110.4M memory peak.
Jan 23 00:57:44.609506 containerd[1550]: time="2026-01-23T00:57:44.608896355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:44.611881 containerd[1550]: time="2026-01-23T00:57:44.611495454Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162440"
Jan 23 00:57:44.613563 containerd[1550]: time="2026-01-23T00:57:44.613380218Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:44.624033 containerd[1550]: time="2026-01-23T00:57:44.623686726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:44.625846 containerd[1550]: time="2026-01-23T00:57:44.625638615Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest
\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 8.007786879s"
Jan 23 00:57:44.625846 containerd[1550]: time="2026-01-23T00:57:44.625718694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 23 00:57:44.629055 containerd[1550]: time="2026-01-23T00:57:44.629029339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 00:57:46.854246 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 00:57:46.903904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:57:48.223357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:57:48.240713 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:57:48.886370 kubelet[2153]: E0123 00:57:48.886206 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:57:48.893250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:57:48.893777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:57:48.895159 systemd[1]: kubelet.service: Consumed 1.651s CPU time, 108.6M memory peak.
Jan 23 00:57:49.944007 containerd[1550]: time="2026-01-23T00:57:49.942861412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:49.946222 containerd[1550]: time="2026-01-23T00:57:49.944841341Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725927"
Jan 23 00:57:49.949185 containerd[1550]: time="2026-01-23T00:57:49.948856068Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:49.954042 containerd[1550]: time="2026-01-23T00:57:49.953781977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:57:49.955187 containerd[1550]: time="2026-01-23T00:57:49.955086770Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 5.326022307s"
Jan 23 00:57:49.955187 containerd[1550]: time="2026-01-23T00:57:49.955151832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 23 00:57:49.957932 containerd[1550]: time="2026-01-23T00:57:49.957788044Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 00:57:52.721100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444941650.mount: Deactivated successfully.
Jan 23 00:57:54.378138 containerd[1550]: time="2026-01-23T00:57:54.377675911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:54.380237 containerd[1550]: time="2026-01-23T00:57:54.379754950Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965293" Jan 23 00:57:54.381995 containerd[1550]: time="2026-01-23T00:57:54.381548010Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:54.385530 containerd[1550]: time="2026-01-23T00:57:54.385423489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:54.386618 containerd[1550]: time="2026-01-23T00:57:54.386379339Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 4.428556139s" Jan 23 00:57:54.386618 containerd[1550]: time="2026-01-23T00:57:54.386424163Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 23 00:57:54.388816 containerd[1550]: time="2026-01-23T00:57:54.388604862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 00:57:55.080043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780936348.mount: Deactivated successfully. 
Jan 23 00:57:58.172465 containerd[1550]: time="2026-01-23T00:57:58.171653341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.175573 containerd[1550]: time="2026-01-23T00:57:58.173861179Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Jan 23 00:57:58.177638 containerd[1550]: time="2026-01-23T00:57:58.177531886Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.183385 containerd[1550]: time="2026-01-23T00:57:58.183130175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.185111 containerd[1550]: time="2026-01-23T00:57:58.184783390Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.796103748s" Jan 23 00:57:58.185111 containerd[1550]: time="2026-01-23T00:57:58.184866945Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 23 00:57:58.188502 containerd[1550]: time="2026-01-23T00:57:58.188189035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 00:57:58.781150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714547842.mount: Deactivated successfully. 
Jan 23 00:57:58.789175 containerd[1550]: time="2026-01-23T00:57:58.788850436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.790473 containerd[1550]: time="2026-01-23T00:57:58.790386865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Jan 23 00:57:58.792556 containerd[1550]: time="2026-01-23T00:57:58.792410537Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.796793 containerd[1550]: time="2026-01-23T00:57:58.796600593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:57:58.798025 containerd[1550]: time="2026-01-23T00:57:58.797868948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 609.642684ms" Jan 23 00:57:58.798123 containerd[1550]: time="2026-01-23T00:57:58.798024479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 23 00:57:58.800344 containerd[1550]: time="2026-01-23T00:57:58.800100257Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 00:57:59.088160 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 00:57:59.091353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 00:57:59.589610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430528444.mount: Deactivated successfully. Jan 23 00:57:59.660348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:57:59.673868 (kubelet)[2238]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:57:59.938442 kubelet[2238]: E0123 00:57:59.935468 2238 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:57:59.942901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:57:59.943279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:57:59.944691 systemd[1]: kubelet.service: Consumed 578ms CPU time, 110.1M memory peak. 
Jan 23 00:58:06.032580 containerd[1550]: time="2026-01-23T00:58:06.032132856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:06.034459 containerd[1550]: time="2026-01-23T00:58:06.034354153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Jan 23 00:58:06.037280 containerd[1550]: time="2026-01-23T00:58:06.037158284Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:06.044854 containerd[1550]: time="2026-01-23T00:58:06.044643912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:58:06.046499 containerd[1550]: time="2026-01-23T00:58:06.046157356Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.245980435s" Jan 23 00:58:06.046499 containerd[1550]: time="2026-01-23T00:58:06.046298109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 23 00:58:10.089779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 00:58:10.097038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:10.465087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:58:10.571073 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:58:10.960440 kubelet[2327]: E0123 00:58:10.958486 2327 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:58:10.970302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:58:10.971319 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:58:10.973487 systemd[1]: kubelet.service: Consumed 678ms CPU time, 110.2M memory peak. Jan 23 00:58:11.945384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:11.945692 systemd[1]: kubelet.service: Consumed 678ms CPU time, 110.2M memory peak. Jan 23 00:58:11.949612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:12.002847 systemd[1]: Reload requested from client PID 2343 ('systemctl') (unit session-9.scope)... Jan 23 00:58:12.002911 systemd[1]: Reloading... Jan 23 00:58:12.125055 zram_generator::config[2395]: No configuration found. Jan 23 00:58:12.449311 systemd[1]: Reloading finished in 445 ms. Jan 23 00:58:12.547429 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 00:58:12.547618 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 00:58:12.548163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:12.548248 systemd[1]: kubelet.service: Consumed 196ms CPU time, 98.2M memory peak. Jan 23 00:58:12.550721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 00:58:14.263372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:14.294892 (kubelet)[2434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:58:14.726471 kubelet[2434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:58:14.726471 kubelet[2434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:58:14.726471 kubelet[2434]: I0123 00:58:14.725663 2434 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:58:15.945680 kubelet[2434]: I0123 00:58:15.945299 2434 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:58:15.945680 kubelet[2434]: I0123 00:58:15.945487 2434 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:58:15.945680 kubelet[2434]: I0123 00:58:15.945688 2434 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:58:15.945680 kubelet[2434]: I0123 00:58:15.945728 2434 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 00:58:15.951588 kubelet[2434]: I0123 00:58:15.949291 2434 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:58:16.128851 kubelet[2434]: E0123 00:58:16.128478 2434 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 00:58:16.130128 kubelet[2434]: I0123 00:58:16.129554 2434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:58:16.160133 kubelet[2434]: I0123 00:58:16.159551 2434 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:58:16.219622 kubelet[2434]: I0123 00:58:16.217850 2434 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 00:58:16.219622 kubelet[2434]: I0123 00:58:16.219595 2434 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:58:16.220558 kubelet[2434]: I0123 00:58:16.219639 2434 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:58:16.220558 kubelet[2434]: I0123 00:58:16.220280 2434 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:58:16.220558 
kubelet[2434]: I0123 00:58:16.220300 2434 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:58:16.221052 kubelet[2434]: I0123 00:58:16.220732 2434 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:58:16.234918 kubelet[2434]: I0123 00:58:16.234112 2434 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:16.236548 kubelet[2434]: I0123 00:58:16.235697 2434 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:58:16.237813 kubelet[2434]: I0123 00:58:16.236815 2434 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:58:16.237813 kubelet[2434]: I0123 00:58:16.237274 2434 kubelet.go:387] "Adding apiserver pod source" Jan 23 00:58:16.237813 kubelet[2434]: I0123 00:58:16.237431 2434 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:58:16.239274 kubelet[2434]: E0123 00:58:16.239147 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:58:16.239558 kubelet[2434]: E0123 00:58:16.239405 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:58:16.243885 kubelet[2434]: I0123 00:58:16.243801 2434 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:58:16.245345 kubelet[2434]: I0123 00:58:16.245272 2434 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:58:16.245345 kubelet[2434]: I0123 00:58:16.245340 2434 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 00:58:16.245802 kubelet[2434]: W0123 00:58:16.245722 2434 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 00:58:16.265511 kubelet[2434]: I0123 00:58:16.264291 2434 server.go:1262] "Started kubelet" Jan 23 00:58:16.289576 kubelet[2434]: I0123 00:58:16.273582 2434 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:58:16.289576 kubelet[2434]: I0123 00:58:16.286490 2434 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:58:16.289576 kubelet[2434]: I0123 00:58:16.287151 2434 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 00:58:16.291062 kubelet[2434]: I0123 00:58:16.289713 2434 server.go:310] "Adding debug handlers to kubelet server" Jan 23 00:58:16.291366 kubelet[2434]: I0123 00:58:16.291342 2434 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:58:16.291891 kubelet[2434]: I0123 00:58:16.291816 2434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:58:16.292930 kubelet[2434]: I0123 00:58:16.292863 2434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:58:16.293794 kubelet[2434]: E0123 00:58:16.291811 2434 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d36447440a2c8 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 00:58:16.262542024 +0000 UTC m=+1.956452445,LastTimestamp:2026-01-23 00:58:16.262542024 +0000 UTC m=+1.956452445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 00:58:16.302255 kubelet[2434]: I0123 00:58:16.301676 2434 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:58:16.304715 kubelet[2434]: E0123 00:58:16.304654 2434 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 00:58:16.305697 kubelet[2434]: I0123 00:58:16.305600 2434 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:58:16.306063 kubelet[2434]: E0123 00:58:16.305686 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Jan 23 00:58:16.306297 kubelet[2434]: E0123 00:58:16.305922 2434 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:58:16.306847 kubelet[2434]: I0123 00:58:16.306825 2434 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:58:16.311821 kubelet[2434]: I0123 00:58:16.306887 2434 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:58:16.313898 kubelet[2434]: I0123 00:58:16.313695 2434 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:58:16.314285 kubelet[2434]: E0123 00:58:16.313928 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:16.317728 kubelet[2434]: I0123 00:58:16.317700 2434 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:58:16.397655 kubelet[2434]: I0123 00:58:16.397431 2434 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:58:16.397655 kubelet[2434]: I0123 00:58:16.397484 2434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:58:16.397655 kubelet[2434]: I0123 00:58:16.397551 2434 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:16.406007 kubelet[2434]: I0123 00:58:16.405169 2434 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 23 00:58:16.406007 kubelet[2434]: E0123 00:58:16.405541 2434 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 00:58:16.412130 kubelet[2434]: I0123 00:58:16.409071 2434 policy_none.go:49] "None policy: Start" Jan 23 00:58:16.412130 kubelet[2434]: I0123 00:58:16.409145 2434 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 00:58:16.412130 kubelet[2434]: I0123 00:58:16.409207 2434 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:58:16.427147 kubelet[2434]: I0123 00:58:16.421038 2434 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 00:58:16.427147 kubelet[2434]: I0123 00:58:16.426240 2434 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:58:16.427147 kubelet[2434]: I0123 00:58:16.426246 2434 policy_none.go:47] "Start" Jan 23 00:58:16.427147 kubelet[2434]: I0123 00:58:16.426832 2434 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:58:16.427980 kubelet[2434]: E0123 00:58:16.427593 2434 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:58:16.429216 kubelet[2434]: E0123 00:58:16.429141 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:58:16.487225 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 00:58:16.507313 kubelet[2434]: E0123 00:58:16.506375 2434 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 00:58:16.509273 kubelet[2434]: E0123 00:58:16.508699 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Jan 23 00:58:16.531596 kubelet[2434]: E0123 00:58:16.530922 2434 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 00:58:16.541883 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 00:58:16.587658 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:58:16.607924 kubelet[2434]: E0123 00:58:16.607453 2434 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 00:58:16.609622 kubelet[2434]: E0123 00:58:16.609538 2434 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:58:16.610500 kubelet[2434]: I0123 00:58:16.610415 2434 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:58:16.610563 kubelet[2434]: I0123 00:58:16.610500 2434 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:58:16.620681 kubelet[2434]: I0123 00:58:16.620376 2434 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:58:16.620681 kubelet[2434]: E0123 00:58:16.620855 2434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:58:16.621669 kubelet[2434]: E0123 00:58:16.621074 2434 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 00:58:16.715849 kubelet[2434]: I0123 00:58:16.715559 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 00:58:16.716608 kubelet[2434]: E0123 00:58:16.716472 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 23 00:58:16.760524 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Jan 23 00:58:16.788508 kubelet[2434]: E0123 00:58:16.788303 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:16.793800 systemd[1]: Created slice kubepods-burstable-pod8b6d4ba821a97bf3b86b268b688be491.slice - libcontainer container kubepods-burstable-pod8b6d4ba821a97bf3b86b268b688be491.slice. 
Jan 23 00:58:16.797437 kubelet[2434]: E0123 00:58:16.797395 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:16.813608 kubelet[2434]: I0123 00:58:16.813093 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:16.813608 kubelet[2434]: I0123 00:58:16.813169 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:16.813608 kubelet[2434]: I0123 00:58:16.813200 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b6d4ba821a97bf3b86b268b688be491-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6d4ba821a97bf3b86b268b688be491\") " pod="kube-system/kube-apiserver-localhost" Jan 23 00:58:16.813608 kubelet[2434]: I0123 00:58:16.813256 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b6d4ba821a97bf3b86b268b688be491-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b6d4ba821a97bf3b86b268b688be491\") " pod="kube-system/kube-apiserver-localhost" Jan 23 00:58:16.813608 kubelet[2434]: I0123 00:58:16.813284 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:16.813398 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice. Jan 23 00:58:16.814184 kubelet[2434]: I0123 00:58:16.813308 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:16.814184 kubelet[2434]: I0123 00:58:16.813334 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost" Jan 23 00:58:16.814184 kubelet[2434]: I0123 00:58:16.813355 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b6d4ba821a97bf3b86b268b688be491-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6d4ba821a97bf3b86b268b688be491\") " pod="kube-system/kube-apiserver-localhost" Jan 23 00:58:16.814184 kubelet[2434]: I0123 00:58:16.813376 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:16.820783 kubelet[2434]: E0123 00:58:16.820602 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:16.909954 kubelet[2434]: E0123 00:58:16.909906 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Jan 23 00:58:16.918816 kubelet[2434]: I0123 00:58:16.918788 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 00:58:16.919268 kubelet[2434]: E0123 00:58:16.919222 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 23 00:58:17.095510 containerd[1550]: time="2026-01-23T00:58:17.095311021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:17.102422 containerd[1550]: time="2026-01-23T00:58:17.102333841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b6d4ba821a97bf3b86b268b688be491,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:17.126556 containerd[1550]: time="2026-01-23T00:58:17.126433454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}" Jan 23 00:58:17.322169 kubelet[2434]: I0123 00:58:17.322050 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 00:58:17.322817 kubelet[2434]: E0123 00:58:17.322520 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 23 00:58:17.445998 kubelet[2434]: E0123 00:58:17.445684 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:17.507682 kubelet[2434]: E0123 00:58:17.507595 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:58:17.516890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283710096.mount: Deactivated successfully. 
Jan 23 00:58:17.525089 containerd[1550]: time="2026-01-23T00:58:17.525023781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:17.531052 containerd[1550]: time="2026-01-23T00:58:17.530977255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 00:58:17.533872 containerd[1550]: time="2026-01-23T00:58:17.533735667Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:17.536537 containerd[1550]: time="2026-01-23T00:58:17.536393756Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:17.539408 containerd[1550]: time="2026-01-23T00:58:17.539259481Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:58:17.541000 containerd[1550]: time="2026-01-23T00:58:17.540826090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:58:17.542193 containerd[1550]: time="2026-01-23T00:58:17.542158210Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:58:17.544032 containerd[1550]: time="2026-01-23T00:58:17.543895911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 
00:58:17.544612 containerd[1550]: time="2026-01-23T00:58:17.544488680Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 440.406607ms" Jan 23 00:58:17.549295 containerd[1550]: time="2026-01-23T00:58:17.549255749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 420.681589ms" Jan 23 00:58:17.552233 containerd[1550]: time="2026-01-23T00:58:17.552175725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.924418ms" Jan 23 00:58:17.951845 kubelet[2434]: E0123 00:58:17.951486 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:58:17.954184 kubelet[2434]: E0123 00:58:17.952432 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Jan 23 00:58:17.960686 containerd[1550]: 
time="2026-01-23T00:58:17.960425661Z" level=info msg="connecting to shim 0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b" address="unix:///run/containerd/s/71386c0c6c4a81d81ac1b7c63b429dceea2ac898c636c161c5003c673c9b7852" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:17.994877 containerd[1550]: time="2026-01-23T00:58:17.994816749Z" level=info msg="connecting to shim abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff" address="unix:///run/containerd/s/680640c42752cd14307e86cbaaf598124e884206f1bc299587d27662fdf2983a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:17.998827 kubelet[2434]: E0123 00:58:17.998572 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:58:18.060360 containerd[1550]: time="2026-01-23T00:58:18.059228147Z" level=info msg="connecting to shim f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773" address="unix:///run/containerd/s/dc88bb8bf1d2f77f24e8c765c3950bb0896aada27280fbb128de00b5ed50b18b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:58:18.207441 kubelet[2434]: E0123 00:58:18.205566 2434 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 00:58:18.227051 kubelet[2434]: I0123 00:58:18.226892 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 00:58:18.228379 kubelet[2434]: E0123 00:58:18.228308 2434 kubelet_node_status.go:107] "Unable to register node with 
API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Jan 23 00:58:18.237547 systemd[1]: Started cri-containerd-0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b.scope - libcontainer container 0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b. Jan 23 00:58:18.272700 systemd[1]: Started cri-containerd-abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff.scope - libcontainer container abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff. Jan 23 00:58:18.340286 systemd[1]: Started cri-containerd-f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773.scope - libcontainer container f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773. Jan 23 00:58:18.753486 containerd[1550]: time="2026-01-23T00:58:18.749346011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8b6d4ba821a97bf3b86b268b688be491,Namespace:kube-system,Attempt:0,} returns sandbox id \"0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b\"" Jan 23 00:58:18.757832 containerd[1550]: time="2026-01-23T00:58:18.755412044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff\"" Jan 23 00:58:18.827127 containerd[1550]: time="2026-01-23T00:58:18.826628724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773\"" Jan 23 00:58:18.831396 containerd[1550]: time="2026-01-23T00:58:18.827691635Z" level=info msg="CreateContainer within sandbox \"abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:58:18.846068 containerd[1550]: time="2026-01-23T00:58:18.845199779Z" level=info msg="CreateContainer within sandbox \"0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:58:19.046518 containerd[1550]: time="2026-01-23T00:58:19.036717846Z" level=info msg="CreateContainer within sandbox \"f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:58:19.098079 containerd[1550]: time="2026-01-23T00:58:19.097845599Z" level=info msg="Container 99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:19.101012 containerd[1550]: time="2026-01-23T00:58:19.100164919Z" level=info msg="Container 43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:19.101682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201812141.mount: Deactivated successfully. 
Jan 23 00:58:19.117576 containerd[1550]: time="2026-01-23T00:58:19.117535236Z" level=info msg="CreateContainer within sandbox \"abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618\"" Jan 23 00:58:19.119130 containerd[1550]: time="2026-01-23T00:58:19.119102672Z" level=info msg="StartContainer for \"99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618\"" Jan 23 00:58:19.121042 containerd[1550]: time="2026-01-23T00:58:19.121008801Z" level=info msg="connecting to shim 99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618" address="unix:///run/containerd/s/680640c42752cd14307e86cbaaf598124e884206f1bc299587d27662fdf2983a" protocol=ttrpc version=3 Jan 23 00:58:19.129466 containerd[1550]: time="2026-01-23T00:58:19.129425163Z" level=info msg="Container 1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:58:19.132639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2926784104.mount: Deactivated successfully. 
Jan 23 00:58:19.133146 containerd[1550]: time="2026-01-23T00:58:19.133076446Z" level=info msg="CreateContainer within sandbox \"0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023\"" Jan 23 00:58:19.136037 containerd[1550]: time="2026-01-23T00:58:19.135163414Z" level=info msg="StartContainer for \"43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023\"" Jan 23 00:58:19.136853 containerd[1550]: time="2026-01-23T00:58:19.136811972Z" level=info msg="connecting to shim 43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023" address="unix:///run/containerd/s/71386c0c6c4a81d81ac1b7c63b429dceea2ac898c636c161c5003c673c9b7852" protocol=ttrpc version=3 Jan 23 00:58:19.149353 containerd[1550]: time="2026-01-23T00:58:19.149249903Z" level=info msg="CreateContainer within sandbox \"f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0\"" Jan 23 00:58:19.150458 containerd[1550]: time="2026-01-23T00:58:19.150317026Z" level=info msg="StartContainer for \"1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0\"" Jan 23 00:58:19.154546 containerd[1550]: time="2026-01-23T00:58:19.154463223Z" level=info msg="connecting to shim 1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0" address="unix:///run/containerd/s/dc88bb8bf1d2f77f24e8c765c3950bb0896aada27280fbb128de00b5ed50b18b" protocol=ttrpc version=3 Jan 23 00:58:19.169669 systemd[1]: Started cri-containerd-99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618.scope - libcontainer container 99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618. 
Jan 23 00:58:19.200201 systemd[1]: Started cri-containerd-43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023.scope - libcontainer container 43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023. Jan 23 00:58:19.212291 systemd[1]: Started cri-containerd-1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0.scope - libcontainer container 1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0. Jan 23 00:58:19.349653 containerd[1550]: time="2026-01-23T00:58:19.349514327Z" level=info msg="StartContainer for \"43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023\" returns successfully" Jan 23 00:58:19.365342 containerd[1550]: time="2026-01-23T00:58:19.365209173Z" level=info msg="StartContainer for \"99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618\" returns successfully" Jan 23 00:58:19.408005 containerd[1550]: time="2026-01-23T00:58:19.405151600Z" level=info msg="StartContainer for \"1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0\" returns successfully" Jan 23 00:58:19.590893 kubelet[2434]: E0123 00:58:19.590581 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Jan 23 00:58:19.628078 kubelet[2434]: E0123 00:58:19.604563 2434 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:58:19.685506 kubelet[2434]: E0123 00:58:19.685336 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:19.691335 
kubelet[2434]: E0123 00:58:19.691277 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:19.697407 kubelet[2434]: E0123 00:58:19.697343 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:19.837903 kubelet[2434]: I0123 00:58:19.837511 2434 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 00:58:20.706055 kubelet[2434]: E0123 00:58:20.705820 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:20.709107 kubelet[2434]: E0123 00:58:20.708220 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:24.738577 kubelet[2434]: E0123 00:58:24.738299 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:24.921498 kubelet[2434]: E0123 00:58:24.921439 2434 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 00:58:26.516815 kubelet[2434]: E0123 00:58:26.516144 2434 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 23 00:58:26.631526 kubelet[2434]: E0123 00:58:26.630566 2434 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 00:58:26.685606 kubelet[2434]: E0123 00:58:26.685432 2434 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.188d36447440a2c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 00:58:16.262542024 +0000 UTC m=+1.956452445,LastTimestamp:2026-01-23 00:58:16.262542024 +0000 UTC m=+1.956452445,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 00:58:26.726711 kubelet[2434]: I0123 00:58:26.726422 2434 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 00:58:26.727375 kubelet[2434]: E0123 00:58:26.726721 2434 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 00:58:26.806423 kubelet[2434]: I0123 00:58:26.805910 2434 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 00:58:26.830297 kubelet[2434]: E0123 00:58:26.828369 2434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 23 00:58:26.830297 kubelet[2434]: I0123 00:58:26.828411 2434 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 00:58:26.835907 kubelet[2434]: E0123 00:58:26.835184 2434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 23 00:58:26.835907 kubelet[2434]: I0123 00:58:26.835211 2434 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:26.838313 kubelet[2434]: E0123 00:58:26.837889 2434 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 23 00:58:27.277371 kubelet[2434]: I0123 00:58:27.275849 2434 apiserver.go:52] "Watching apiserver" Jan 23 00:58:27.307529 kubelet[2434]: I0123 00:58:27.306419 2434 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:58:28.553351 kubelet[2434]: I0123 00:58:28.552285 2434 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 00:58:30.884263 systemd[1]: Reload requested from client PID 2722 ('systemctl') (unit session-9.scope)... Jan 23 00:58:30.884304 systemd[1]: Reloading... Jan 23 00:58:31.088164 zram_generator::config[2771]: No configuration found. Jan 23 00:58:31.596430 systemd[1]: Reloading finished in 711 ms. Jan 23 00:58:31.679334 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:31.712729 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 00:58:31.713480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:31.713645 systemd[1]: kubelet.service: Consumed 5.367s CPU time, 126.6M memory peak. Jan 23 00:58:31.718323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:58:32.061144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:58:32.086787 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:58:32.227012 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 23 00:58:32.228701 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 00:58:32.228701 kubelet[2810]: I0123 00:58:32.228125 2810 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:58:32.239189 kubelet[2810]: I0123 00:58:32.238913 2810 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:58:32.239189 kubelet[2810]: I0123 00:58:32.239074 2810 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:58:32.239189 kubelet[2810]: I0123 00:58:32.239116 2810 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:58:32.239189 kubelet[2810]: I0123 00:58:32.239132 2810 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:58:32.240173 kubelet[2810]: I0123 00:58:32.239438 2810 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:58:32.241006 kubelet[2810]: I0123 00:58:32.240826 2810 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 00:58:32.260732 kubelet[2810]: I0123 00:58:32.260336 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:58:32.275357 kubelet[2810]: I0123 00:58:32.275209 2810 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:58:32.285307 kubelet[2810]: I0123 00:58:32.285263 2810 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 00:58:32.285722 kubelet[2810]: I0123 00:58:32.285580 2810 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:58:32.285908 kubelet[2810]: I0123 00:58:32.285645 2810 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:58:32.286159 kubelet[2810]: I0123 00:58:32.285925 2810 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 00:58:32.286159 
kubelet[2810]: I0123 00:58:32.285992 2810 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:58:32.286159 kubelet[2810]: I0123 00:58:32.286044 2810 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:58:32.287407 kubelet[2810]: I0123 00:58:32.287303 2810 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:32.287639 kubelet[2810]: I0123 00:58:32.287583 2810 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:58:32.287639 kubelet[2810]: I0123 00:58:32.287614 2810 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:58:32.287639 kubelet[2810]: I0123 00:58:32.287643 2810 kubelet.go:387] "Adding apiserver pod source" Jan 23 00:58:32.287766 kubelet[2810]: I0123 00:58:32.287672 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:58:32.291993 kubelet[2810]: I0123 00:58:32.289773 2810 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:58:32.291993 kubelet[2810]: I0123 00:58:32.290444 2810 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:58:32.291993 kubelet[2810]: I0123 00:58:32.290468 2810 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 00:58:32.305447 kubelet[2810]: I0123 00:58:32.305135 2810 server.go:1262] "Started kubelet" Jan 23 00:58:32.305789 kubelet[2810]: I0123 00:58:32.305574 2810 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:58:32.307731 kubelet[2810]: I0123 00:58:32.307493 2810 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:58:32.307731 kubelet[2810]: I0123 00:58:32.307692 2810 server_v1.go:49] 
"podresources" method="list" useActivePods=true Jan 23 00:58:32.308346 kubelet[2810]: I0123 00:58:32.308311 2810 server.go:310] "Adding debug handlers to kubelet server" Jan 23 00:58:32.309821 kubelet[2810]: I0123 00:58:32.309704 2810 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:58:32.329660 kubelet[2810]: I0123 00:58:32.329484 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:58:32.332598 kubelet[2810]: I0123 00:58:32.331694 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:58:32.332598 kubelet[2810]: I0123 00:58:32.331980 2810 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:58:32.332598 kubelet[2810]: I0123 00:58:32.332141 2810 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:58:32.332598 kubelet[2810]: I0123 00:58:32.332271 2810 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:58:32.332846 kubelet[2810]: E0123 00:58:32.332747 2810 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 00:58:32.337839 kubelet[2810]: E0123 00:58:32.336895 2810 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:58:32.337839 kubelet[2810]: I0123 00:58:32.337216 2810 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:58:32.337839 kubelet[2810]: I0123 00:58:32.337315 2810 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:58:32.340295 kubelet[2810]: I0123 00:58:32.340222 2810 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:58:32.386040 kubelet[2810]: I0123 00:58:32.385886 2810 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:58:32.389641 kubelet[2810]: I0123 00:58:32.389552 2810 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 00:58:32.389641 kubelet[2810]: I0123 00:58:32.389623 2810 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:58:32.390072 kubelet[2810]: I0123 00:58:32.389985 2810 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:58:32.390146 kubelet[2810]: E0123 00:58:32.390068 2810 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:58:32.408174 kubelet[2810]: I0123 00:58:32.407785 2810 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:58:32.408174 kubelet[2810]: I0123 00:58:32.407859 2810 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:58:32.408174 kubelet[2810]: I0123 00:58:32.407882 2810 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:58:32.408174 kubelet[2810]: I0123 00:58:32.408153 2810 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 00:58:32.408174 kubelet[2810]: I0123 00:58:32.408170 2810 state_mem.go:96] "Updated CPUSet assignments" assignments={} 
Jan 23 00:58:32.408486 kubelet[2810]: I0123 00:58:32.408195 2810 policy_none.go:49] "None policy: Start"
Jan 23 00:58:32.408486 kubelet[2810]: I0123 00:58:32.408209 2810 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 00:58:32.408486 kubelet[2810]: I0123 00:58:32.408222 2810 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 00:58:32.408486 kubelet[2810]: I0123 00:58:32.408310 2810 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 23 00:58:32.408486 kubelet[2810]: I0123 00:58:32.408318 2810 policy_none.go:47] "Start"
Jan 23 00:58:32.431427 kubelet[2810]: E0123 00:58:32.430998 2810 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 00:58:32.431427 kubelet[2810]: I0123 00:58:32.431266 2810 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 00:58:32.431427 kubelet[2810]: I0123 00:58:32.431284 2810 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 00:58:32.433424 kubelet[2810]: E0123 00:58:32.433372 2810 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 00:58:32.437891 kubelet[2810]: I0123 00:58:32.437034 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 00:58:32.491731 kubelet[2810]: I0123 00:58:32.491641 2810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:32.495160 kubelet[2810]: I0123 00:58:32.495083 2810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:32.498870 kubelet[2810]: I0123 00:58:32.497335 2810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jan 23 00:58:32.515886 kubelet[2810]: E0123 00:58:32.515678 2810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:32.534098 kubelet[2810]: I0123 00:58:32.534054 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:32.534347 kubelet[2810]: I0123 00:58:32.534145 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Jan 23 00:58:32.534892 kubelet[2810]: I0123 00:58:32.534581 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b6d4ba821a97bf3b86b268b688be491-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6d4ba821a97bf3b86b268b688be491\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:32.534892 kubelet[2810]: I0123 00:58:32.534602 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b6d4ba821a97bf3b86b268b688be491-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8b6d4ba821a97bf3b86b268b688be491\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:32.535356 kubelet[2810]: I0123 00:58:32.535208 2810 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jan 23 00:58:32.536200 kubelet[2810]: I0123 00:58:32.535824 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:32.536200 kubelet[2810]: I0123 00:58:32.535973 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:32.536200 kubelet[2810]: I0123 00:58:32.535995 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b6d4ba821a97bf3b86b268b688be491-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8b6d4ba821a97bf3b86b268b688be491\") " pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:32.536200 kubelet[2810]: I0123 00:58:32.536145 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:32.536200 kubelet[2810]: I0123 00:58:32.536162 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:32.654786 kubelet[2810]: I0123 00:58:32.654421 2810 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jan 23 00:58:32.656396 kubelet[2810]: I0123 00:58:32.655683 2810 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jan 23 00:58:33.289602 kubelet[2810]: I0123 00:58:33.289434 2810 apiserver.go:52] "Watching apiserver"
Jan 23 00:58:33.338550 kubelet[2810]: I0123 00:58:33.337495 2810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 00:58:33.439254 kubelet[2810]: I0123 00:58:33.437882 2810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:33.441251 kubelet[2810]: I0123 00:58:33.440901 2810 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:33.451311 kubelet[2810]: E0123 00:58:33.451137 2810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 23 00:58:33.456752 kubelet[2810]: E0123 00:58:33.456664 2810 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jan 23 00:58:33.508035 kubelet[2810]: I0123 00:58:33.507867 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.507756604 podStartE2EDuration="1.507756604s" podCreationTimestamp="2026-01-23 00:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:33.505898011 +0000 UTC m=+1.400988074" watchObservedRunningTime="2026-01-23 00:58:33.507756604 +0000 UTC m=+1.402846647"
Jan 23 00:58:33.525021 kubelet[2810]: I0123 00:58:33.524754 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.524733028 podStartE2EDuration="1.524733028s" podCreationTimestamp="2026-01-23 00:58:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:33.524434059 +0000 UTC m=+1.419524112" watchObservedRunningTime="2026-01-23 00:58:33.524733028 +0000 UTC m=+1.419823071"
Jan 23 00:58:33.782225 kubelet[2810]: I0123 00:58:33.782122 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.7820806959999995 podStartE2EDuration="5.782080696s" podCreationTimestamp="2026-01-23 00:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:33.756050241 +0000 UTC m=+1.651140294" watchObservedRunningTime="2026-01-23 00:58:33.782080696 +0000 UTC m=+1.677170739"
Jan 23 00:58:35.723089 kubelet[2810]: I0123 00:58:35.720341 2810 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 00:58:35.753102 kubelet[2810]: I0123 00:58:35.735742 2810 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 00:58:35.753559 containerd[1550]: time="2026-01-23T00:58:35.729383104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 00:58:36.453144 systemd[1]: Created slice kubepods-besteffort-pod67edcaf2_77ff_4fea_baa1_2638499b37ed.slice - libcontainer container kubepods-besteffort-pod67edcaf2_77ff_4fea_baa1_2638499b37ed.slice.
Jan 23 00:58:36.528391 kubelet[2810]: I0123 00:58:36.528051 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67edcaf2-77ff-4fea-baa1-2638499b37ed-kube-proxy\") pod \"kube-proxy-npvss\" (UID: \"67edcaf2-77ff-4fea-baa1-2638499b37ed\") " pod="kube-system/kube-proxy-npvss"
Jan 23 00:58:36.528391 kubelet[2810]: I0123 00:58:36.528110 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67edcaf2-77ff-4fea-baa1-2638499b37ed-xtables-lock\") pod \"kube-proxy-npvss\" (UID: \"67edcaf2-77ff-4fea-baa1-2638499b37ed\") " pod="kube-system/kube-proxy-npvss"
Jan 23 00:58:36.528391 kubelet[2810]: I0123 00:58:36.528130 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67edcaf2-77ff-4fea-baa1-2638499b37ed-lib-modules\") pod \"kube-proxy-npvss\" (UID: \"67edcaf2-77ff-4fea-baa1-2638499b37ed\") " pod="kube-system/kube-proxy-npvss"
Jan 23 00:58:36.528391 kubelet[2810]: I0123 00:58:36.528153 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrxl7\" (UniqueName: \"kubernetes.io/projected/67edcaf2-77ff-4fea-baa1-2638499b37ed-kube-api-access-xrxl7\") pod \"kube-proxy-npvss\" (UID: \"67edcaf2-77ff-4fea-baa1-2638499b37ed\") " pod="kube-system/kube-proxy-npvss"
Jan 23 00:58:36.939475 containerd[1550]: time="2026-01-23T00:58:36.938691626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npvss,Uid:67edcaf2-77ff-4fea-baa1-2638499b37ed,Namespace:kube-system,Attempt:0,}"
Jan 23 00:58:37.044717 kubelet[2810]: I0123 00:58:37.044432 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78pcv\" (UniqueName: \"kubernetes.io/projected/517ad65d-4fce-406c-89d5-f563d26f076b-kube-api-access-78pcv\") pod \"tigera-operator-65cdcdfd6d-jsqwn\" (UID: \"517ad65d-4fce-406c-89d5-f563d26f076b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jsqwn"
Jan 23 00:58:37.044717 kubelet[2810]: I0123 00:58:37.044595 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/517ad65d-4fce-406c-89d5-f563d26f076b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-jsqwn\" (UID: \"517ad65d-4fce-406c-89d5-f563d26f076b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-jsqwn"
Jan 23 00:58:37.166910 systemd[1]: Created slice kubepods-besteffort-pod517ad65d_4fce_406c_89d5_f563d26f076b.slice - libcontainer container kubepods-besteffort-pod517ad65d_4fce_406c_89d5_f563d26f076b.slice.
Jan 23 00:58:37.245541 containerd[1550]: time="2026-01-23T00:58:37.241777457Z" level=info msg="connecting to shim 9c95caf47b322cac2e6eb1ba6b3aaf8c5b313357ee0088919c32b6a052ef9ab2" address="unix:///run/containerd/s/a1f26f1b92c8f2075dbbc395d771d65a31668ab71d38f4e31582a89d6ae1d4b9" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:58:37.556803 containerd[1550]: time="2026-01-23T00:58:37.556296783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jsqwn,Uid:517ad65d-4fce-406c-89d5-f563d26f076b,Namespace:tigera-operator,Attempt:0,}"
Jan 23 00:58:37.846110 systemd[1]: Started cri-containerd-9c95caf47b322cac2e6eb1ba6b3aaf8c5b313357ee0088919c32b6a052ef9ab2.scope - libcontainer container 9c95caf47b322cac2e6eb1ba6b3aaf8c5b313357ee0088919c32b6a052ef9ab2.
Jan 23 00:58:39.503416 containerd[1550]: time="2026-01-23T00:58:39.502859493Z" level=info msg="connecting to shim 9aba6b212e6eb17fbe6ebd5b313b8645f93c8282215de6e3194d39e1908d16dc" address="unix:///run/containerd/s/fcbe6fb33ea69c8df8ae4145d20ee23d88c73b87acd6cf911e42cc3101c3995a" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:58:41.923593 systemd[1]: Started cri-containerd-9aba6b212e6eb17fbe6ebd5b313b8645f93c8282215de6e3194d39e1908d16dc.scope - libcontainer container 9aba6b212e6eb17fbe6ebd5b313b8645f93c8282215de6e3194d39e1908d16dc.
Jan 23 00:58:43.162809 kubelet[2810]: E0123 00:58:43.160693 2810 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.761s"
Jan 23 00:58:43.408409 containerd[1550]: time="2026-01-23T00:58:43.408362405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-npvss,Uid:67edcaf2-77ff-4fea-baa1-2638499b37ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c95caf47b322cac2e6eb1ba6b3aaf8c5b313357ee0088919c32b6a052ef9ab2\""
Jan 23 00:58:43.427161 containerd[1550]: time="2026-01-23T00:58:43.426729403Z" level=info msg="CreateContainer within sandbox \"9c95caf47b322cac2e6eb1ba6b3aaf8c5b313357ee0088919c32b6a052ef9ab2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 00:58:43.459255 containerd[1550]: time="2026-01-23T00:58:43.456521735Z" level=info msg="Container 9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:58:43.462273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070893762.mount: Deactivated successfully.
Jan 23 00:58:43.505707 containerd[1550]: time="2026-01-23T00:58:43.505608685Z" level=info msg="CreateContainer within sandbox \"9c95caf47b322cac2e6eb1ba6b3aaf8c5b313357ee0088919c32b6a052ef9ab2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e\""
Jan 23 00:58:43.510368 containerd[1550]: time="2026-01-23T00:58:43.510251726Z" level=info msg="StartContainer for \"9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e\""
Jan 23 00:58:43.512870 containerd[1550]: time="2026-01-23T00:58:43.512624759Z" level=info msg="connecting to shim 9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e" address="unix:///run/containerd/s/a1f26f1b92c8f2075dbbc395d771d65a31668ab71d38f4e31582a89d6ae1d4b9" protocol=ttrpc version=3
Jan 23 00:58:43.519508 containerd[1550]: time="2026-01-23T00:58:43.519343218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-jsqwn,Uid:517ad65d-4fce-406c-89d5-f563d26f076b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9aba6b212e6eb17fbe6ebd5b313b8645f93c8282215de6e3194d39e1908d16dc\""
Jan 23 00:58:43.527793 containerd[1550]: time="2026-01-23T00:58:43.527727108Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 00:58:43.550208 systemd[1]: Started cri-containerd-9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e.scope - libcontainer container 9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e.
Jan 23 00:58:43.758019 containerd[1550]: time="2026-01-23T00:58:43.757491189Z" level=info msg="StartContainer for \"9c2b98a58e8b6ae4e5b28592e4c94e33965a70835f561fbd45b18137fdbe214e\" returns successfully"
Jan 23 00:58:44.103139 kubelet[2810]: I0123 00:58:44.103017 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-npvss" podStartSLOduration=8.102992826 podStartE2EDuration="8.102992826s" podCreationTimestamp="2026-01-23 00:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:58:44.100181592 +0000 UTC m=+11.995271635" watchObservedRunningTime="2026-01-23 00:58:44.102992826 +0000 UTC m=+11.998082909"
Jan 23 00:58:45.658741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786263259.mount: Deactivated successfully.
Jan 23 00:58:48.048972 containerd[1550]: time="2026-01-23T00:58:48.048835234Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:48.051090 containerd[1550]: time="2026-01-23T00:58:48.051003503Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 23 00:58:48.052911 containerd[1550]: time="2026-01-23T00:58:48.052818318Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:48.056815 containerd[1550]: time="2026-01-23T00:58:48.056737932Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:58:48.058124 containerd[1550]: time="2026-01-23T00:58:48.058074480Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.52999266s"
Jan 23 00:58:48.058182 containerd[1550]: time="2026-01-23T00:58:48.058132259Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 23 00:58:48.077703 containerd[1550]: time="2026-01-23T00:58:48.077637558Z" level=info msg="CreateContainer within sandbox \"9aba6b212e6eb17fbe6ebd5b313b8645f93c8282215de6e3194d39e1908d16dc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 00:58:48.095604 containerd[1550]: time="2026-01-23T00:58:48.095531162Z" level=info msg="Container c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:58:48.106365 containerd[1550]: time="2026-01-23T00:58:48.106258433Z" level=info msg="CreateContainer within sandbox \"9aba6b212e6eb17fbe6ebd5b313b8645f93c8282215de6e3194d39e1908d16dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55\""
Jan 23 00:58:48.107561 containerd[1550]: time="2026-01-23T00:58:48.107148521Z" level=info msg="StartContainer for \"c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55\""
Jan 23 00:58:48.108556 containerd[1550]: time="2026-01-23T00:58:48.108525893Z" level=info msg="connecting to shim c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55" address="unix:///run/containerd/s/fcbe6fb33ea69c8df8ae4145d20ee23d88c73b87acd6cf911e42cc3101c3995a" protocol=ttrpc version=3
Jan 23 00:58:48.239228 systemd[1]: Started cri-containerd-c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55.scope - libcontainer container c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55.
Jan 23 00:58:48.342912 containerd[1550]: time="2026-01-23T00:58:48.342832886Z" level=info msg="StartContainer for \"c33a6816940f58d07c24d3950f5972dc557dfe902aa0f3dfd9a22a6d3d286f55\" returns successfully"
Jan 23 00:58:55.005249 sudo[1775]: pam_unix(sudo:session): session closed for user root
Jan 23 00:58:55.016752 sshd[1774]: Connection closed by 10.0.0.1 port 49616
Jan 23 00:58:55.027914 sshd-session[1771]: pam_unix(sshd:session): session closed for user core
Jan 23 00:58:55.046274 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:49616.service: Deactivated successfully.
Jan 23 00:58:55.093466 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 00:58:55.108496 systemd[1]: session-9.scope: Consumed 20.518s CPU time, 231M memory peak.
Jan 23 00:58:55.117138 systemd-logind[1531]: Session 9 logged out. Waiting for processes to exit.
Jan 23 00:58:55.129555 systemd-logind[1531]: Removed session 9.
Jan 23 00:58:58.512669 kubelet[2810]: E0123 00:58:58.512522 2810 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.016s"
Jan 23 00:59:06.698847 kubelet[2810]: E0123 00:59:06.698443 2810 kubelet.go:2617] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.299s"
Jan 23 00:59:15.470802 kubelet[2810]: I0123 00:59:15.467118 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-jsqwn" podStartSLOduration=34.924723905 podStartE2EDuration="39.466781478s" podCreationTimestamp="2026-01-23 00:58:36 +0000 UTC" firstStartedPulling="2026-01-23 00:58:43.524540167 +0000 UTC m=+11.419630210" lastFinishedPulling="2026-01-23 00:58:48.06659774 +0000 UTC m=+15.961687783" observedRunningTime="2026-01-23 00:58:49.164224933 +0000 UTC m=+17.059314975" watchObservedRunningTime="2026-01-23 00:59:15.466781478 +0000 UTC m=+43.361871522"
Jan 23 00:59:15.540087 systemd[1]: Created slice kubepods-besteffort-podd04b5378_1dad_4b3c_a087_47550c6ac01d.slice - libcontainer container kubepods-besteffort-podd04b5378_1dad_4b3c_a087_47550c6ac01d.slice.
Jan 23 00:59:15.580004 kubelet[2810]: I0123 00:59:15.579844 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d04b5378-1dad-4b3c-a087-47550c6ac01d-tigera-ca-bundle\") pod \"calico-typha-6fdb97c68d-djwd2\" (UID: \"d04b5378-1dad-4b3c-a087-47550c6ac01d\") " pod="calico-system/calico-typha-6fdb97c68d-djwd2"
Jan 23 00:59:15.580004 kubelet[2810]: I0123 00:59:15.579902 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d04b5378-1dad-4b3c-a087-47550c6ac01d-typha-certs\") pod \"calico-typha-6fdb97c68d-djwd2\" (UID: \"d04b5378-1dad-4b3c-a087-47550c6ac01d\") " pod="calico-system/calico-typha-6fdb97c68d-djwd2"
Jan 23 00:59:15.580004 kubelet[2810]: I0123 00:59:15.579922 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snktd\" (UniqueName: \"kubernetes.io/projected/d04b5378-1dad-4b3c-a087-47550c6ac01d-kube-api-access-snktd\") pod \"calico-typha-6fdb97c68d-djwd2\" (UID: \"d04b5378-1dad-4b3c-a087-47550c6ac01d\") " pod="calico-system/calico-typha-6fdb97c68d-djwd2"
Jan 23 00:59:15.662780 systemd[1]: Created slice kubepods-besteffort-pod638d912e_3c76_451e_8b3f_ac2a7cf07c41.slice - libcontainer container kubepods-besteffort-pod638d912e_3c76_451e_8b3f_ac2a7cf07c41.slice.
Jan 23 00:59:15.681040 kubelet[2810]: I0123 00:59:15.680904 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-lib-modules\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.682654 kubelet[2810]: I0123 00:59:15.682010 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-cni-bin-dir\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.682654 kubelet[2810]: I0123 00:59:15.682052 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6hw\" (UniqueName: \"kubernetes.io/projected/638d912e-3c76-451e-8b3f-ac2a7cf07c41-kube-api-access-9g6hw\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.682654 kubelet[2810]: I0123 00:59:15.682083 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-var-lib-calico\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.682654 kubelet[2810]: I0123 00:59:15.682105 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-xtables-lock\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.682654 kubelet[2810]: I0123 00:59:15.682152 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-cni-log-dir\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.683598 kubelet[2810]: I0123 00:59:15.682176 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-cni-net-dir\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.683598 kubelet[2810]: I0123 00:59:15.682196 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-policysync\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.683598 kubelet[2810]: I0123 00:59:15.682219 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-var-run-calico\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.683598 kubelet[2810]: I0123 00:59:15.682244 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/638d912e-3c76-451e-8b3f-ac2a7cf07c41-flexvol-driver-host\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.683598 kubelet[2810]: I0123 00:59:15.682292 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/638d912e-3c76-451e-8b3f-ac2a7cf07c41-node-certs\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.684736 kubelet[2810]: I0123 00:59:15.682327 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/638d912e-3c76-451e-8b3f-ac2a7cf07c41-tigera-ca-bundle\") pod \"calico-node-tz4vt\" (UID: \"638d912e-3c76-451e-8b3f-ac2a7cf07c41\") " pod="calico-system/calico-node-tz4vt"
Jan 23 00:59:15.809743 kubelet[2810]: E0123 00:59:15.809526 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.809743 kubelet[2810]: W0123 00:59:15.809622 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.809743 kubelet[2810]: E0123 00:59:15.809717 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.818867 kubelet[2810]: E0123 00:59:15.811576 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.818867 kubelet[2810]: W0123 00:59:15.811591 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.818867 kubelet[2810]: E0123 00:59:15.811612 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.843863 kubelet[2810]: E0123 00:59:15.843768 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb"
Jan 23 00:59:15.854523 kubelet[2810]: E0123 00:59:15.854372 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.854523 kubelet[2810]: W0123 00:59:15.854497 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.854649 kubelet[2810]: E0123 00:59:15.854603 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.865030 containerd[1550]: time="2026-01-23T00:59:15.864808986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fdb97c68d-djwd2,Uid:d04b5378-1dad-4b3c-a087-47550c6ac01d,Namespace:calico-system,Attempt:0,}"
Jan 23 00:59:15.914461 update_engine[1541]: I20260123 00:59:15.914218 1541 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 23 00:59:15.916025 update_engine[1541]: I20260123 00:59:15.915347 1541 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 23 00:59:15.922257 update_engine[1541]: I20260123 00:59:15.919727 1541 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 23 00:59:15.926839 update_engine[1541]: I20260123 00:59:15.925877 1541 omaha_request_params.cc:62] Current group set to stable
Jan 23 00:59:15.932889 update_engine[1541]: I20260123 00:59:15.931898 1541 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 23 00:59:15.936516 update_engine[1541]: I20260123 00:59:15.933400 1541 update_attempter.cc:643] Scheduling an action processor start.
Jan 23 00:59:15.936516 update_engine[1541]: I20260123 00:59:15.934240 1541 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 23 00:59:15.941831 update_engine[1541]: I20260123 00:59:15.941795 1541 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 23 00:59:15.942256 update_engine[1541]: I20260123 00:59:15.942178 1541 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 23 00:59:15.942550 update_engine[1541]: I20260123 00:59:15.942524 1541 omaha_request_action.cc:272] Request:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.942550 update_engine[1541]:
Jan 23 00:59:15.944929 update_engine[1541]: I20260123 00:59:15.943176 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 23 00:59:15.952368 locksmithd[1582]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 23 00:59:15.959856 update_engine[1541]: I20260123 00:59:15.957252 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 23 00:59:15.959856 update_engine[1541]: I20260123 00:59:15.959539 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 23 00:59:15.961743 kubelet[2810]: E0123 00:59:15.961679 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.961743 kubelet[2810]: W0123 00:59:15.961716 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.962372 kubelet[2810]: E0123 00:59:15.961752 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.966039 kubelet[2810]: E0123 00:59:15.965189 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.989336 kubelet[2810]: W0123 00:59:15.966337 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.989336 kubelet[2810]: E0123 00:59:15.966373 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.989336 kubelet[2810]: E0123 00:59:15.987377 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.989336 kubelet[2810]: W0123 00:59:15.987510 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.989336 kubelet[2810]: E0123 00:59:15.987656 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.989336 kubelet[2810]: E0123 00:59:15.989549 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.989859 kubelet[2810]: W0123 00:59:15.989564 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.989859 kubelet[2810]: E0123 00:59:15.989656 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.992208 kubelet[2810]: E0123 00:59:15.990537 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:15.992208 kubelet[2810]: W0123 00:59:15.990551 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:15.992208 kubelet[2810]: E0123 00:59:15.990565 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:15.992368 update_engine[1541]: E20260123 00:59:15.986387 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 23 00:59:16.001857 update_engine[1541]: I20260123 00:59:16.001755 1541 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 23 00:59:16.002426 kubelet[2810]: E0123 00:59:16.002399 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:16.002529 kubelet[2810]: W0123 00:59:16.002509 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:16.002653 kubelet[2810]: E0123 00:59:16.002636 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:59:16.003241 kubelet[2810]: E0123 00:59:16.003226 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:59:16.003315 kubelet[2810]: W0123 00:59:16.003303 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:59:16.003369 kubelet[2810]: E0123 00:59:16.003358 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 00:59:16.003651 kubelet[2810]: E0123 00:59:16.003639 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.003732 kubelet[2810]: W0123 00:59:16.003715 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.003855 kubelet[2810]: E0123 00:59:16.003839 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.004502 kubelet[2810]: E0123 00:59:16.004483 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.004597 kubelet[2810]: W0123 00:59:16.004583 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.004673 kubelet[2810]: E0123 00:59:16.004657 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.005332 kubelet[2810]: E0123 00:59:16.005314 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.005437 kubelet[2810]: W0123 00:59:16.005417 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.005518 kubelet[2810]: E0123 00:59:16.005499 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.006405 kubelet[2810]: E0123 00:59:16.006387 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.006551 kubelet[2810]: W0123 00:59:16.006534 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.006634 kubelet[2810]: E0123 00:59:16.006618 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.007294 kubelet[2810]: E0123 00:59:16.007273 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.008213 kubelet[2810]: W0123 00:59:16.007586 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.008213 kubelet[2810]: E0123 00:59:16.007609 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.008515 kubelet[2810]: E0123 00:59:16.008436 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.008807 kubelet[2810]: W0123 00:59:16.008750 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.009172 kubelet[2810]: E0123 00:59:16.009067 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.009471 kubelet[2810]: E0123 00:59:16.009456 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.009651 kubelet[2810]: W0123 00:59:16.009632 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.009734 kubelet[2810]: E0123 00:59:16.009720 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.010668 kubelet[2810]: E0123 00:59:16.010577 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.010668 kubelet[2810]: W0123 00:59:16.010596 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.010668 kubelet[2810]: E0123 00:59:16.010611 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.011738 kubelet[2810]: E0123 00:59:16.011575 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.011738 kubelet[2810]: W0123 00:59:16.011592 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.011738 kubelet[2810]: E0123 00:59:16.011606 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.015558 kubelet[2810]: E0123 00:59:16.015197 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.015558 kubelet[2810]: W0123 00:59:16.015240 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.015558 kubelet[2810]: E0123 00:59:16.015276 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.016115 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.019048 kubelet[2810]: W0123 00:59:16.016131 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.016147 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.017038 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.019048 kubelet[2810]: W0123 00:59:16.017201 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.017214 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.017623 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.019048 kubelet[2810]: W0123 00:59:16.017637 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.017704 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.019048 kubelet[2810]: E0123 00:59:16.018573 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.019408 kubelet[2810]: W0123 00:59:16.018589 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.019408 kubelet[2810]: E0123 00:59:16.018601 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.019408 kubelet[2810]: I0123 00:59:16.018643 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a86b4da-5edc-4f85-b21e-20314381c9bb-kubelet-dir\") pod \"csi-node-driver-qpc42\" (UID: \"1a86b4da-5edc-4f85-b21e-20314381c9bb\") " pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:16.019615 kubelet[2810]: E0123 00:59:16.019461 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.019893 kubelet[2810]: W0123 00:59:16.019823 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.019893 kubelet[2810]: E0123 00:59:16.019885 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.022697 kubelet[2810]: E0123 00:59:16.020515 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.022697 kubelet[2810]: W0123 00:59:16.020626 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.022697 kubelet[2810]: E0123 00:59:16.020640 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.022697 kubelet[2810]: E0123 00:59:16.021163 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.022697 kubelet[2810]: W0123 00:59:16.021175 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.022697 kubelet[2810]: E0123 00:59:16.021188 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.022697 kubelet[2810]: I0123 00:59:16.021280 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1a86b4da-5edc-4f85-b21e-20314381c9bb-registration-dir\") pod \"csi-node-driver-qpc42\" (UID: \"1a86b4da-5edc-4f85-b21e-20314381c9bb\") " pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:16.022697 kubelet[2810]: E0123 00:59:16.021914 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.022697 kubelet[2810]: W0123 00:59:16.021927 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.026107 kubelet[2810]: E0123 00:59:16.022141 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.026107 kubelet[2810]: I0123 00:59:16.022602 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1a86b4da-5edc-4f85-b21e-20314381c9bb-socket-dir\") pod \"csi-node-driver-qpc42\" (UID: \"1a86b4da-5edc-4f85-b21e-20314381c9bb\") " pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:16.026794 kubelet[2810]: E0123 00:59:16.026695 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.026794 kubelet[2810]: W0123 00:59:16.026746 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.026794 kubelet[2810]: E0123 00:59:16.026764 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.027212 kubelet[2810]: E0123 00:59:16.027140 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.027212 kubelet[2810]: W0123 00:59:16.027189 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.027212 kubelet[2810]: E0123 00:59:16.027203 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.028072 kubelet[2810]: E0123 00:59:16.027493 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.028072 kubelet[2810]: W0123 00:59:16.027534 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.028072 kubelet[2810]: E0123 00:59:16.027547 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.028072 kubelet[2810]: I0123 00:59:16.027572 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1a86b4da-5edc-4f85-b21e-20314381c9bb-varrun\") pod \"csi-node-driver-qpc42\" (UID: \"1a86b4da-5edc-4f85-b21e-20314381c9bb\") " pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:16.028072 kubelet[2810]: E0123 00:59:16.027817 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.028072 kubelet[2810]: W0123 00:59:16.027828 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.028072 kubelet[2810]: E0123 00:59:16.027842 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.028072 kubelet[2810]: I0123 00:59:16.027911 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rnk\" (UniqueName: \"kubernetes.io/projected/1a86b4da-5edc-4f85-b21e-20314381c9bb-kube-api-access-m4rnk\") pod \"csi-node-driver-qpc42\" (UID: \"1a86b4da-5edc-4f85-b21e-20314381c9bb\") " pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:16.030114 kubelet[2810]: E0123 00:59:16.028704 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.030114 kubelet[2810]: W0123 00:59:16.028724 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.030114 kubelet[2810]: E0123 00:59:16.028740 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.030307 containerd[1550]: time="2026-01-23T00:59:16.027407801Z" level=info msg="connecting to shim 75924513032aed62a025af178c958c835e8772f4aba44bfc874251c1f2cdf08e" address="unix:///run/containerd/s/daad6a82c662f10e9e1305ff158469a76ba96f15a75b69cb62457631994b556a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:16.033162 kubelet[2810]: E0123 00:59:16.030417 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.033162 kubelet[2810]: W0123 00:59:16.030476 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.033162 kubelet[2810]: E0123 00:59:16.030493 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.033162 kubelet[2810]: E0123 00:59:16.032778 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.033162 kubelet[2810]: W0123 00:59:16.032791 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.033162 kubelet[2810]: E0123 00:59:16.032805 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.033386 containerd[1550]: time="2026-01-23T00:59:16.027691659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tz4vt,Uid:638d912e-3c76-451e-8b3f-ac2a7cf07c41,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:16.035735 kubelet[2810]: E0123 00:59:16.034523 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.035735 kubelet[2810]: W0123 00:59:16.035086 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.038321 kubelet[2810]: E0123 00:59:16.035237 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.038321 kubelet[2810]: E0123 00:59:16.037094 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.038321 kubelet[2810]: W0123 00:59:16.037108 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.038321 kubelet[2810]: E0123 00:59:16.037228 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.038321 kubelet[2810]: E0123 00:59:16.038176 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.038321 kubelet[2810]: W0123 00:59:16.038189 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.038321 kubelet[2810]: E0123 00:59:16.038201 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.119484 containerd[1550]: time="2026-01-23T00:59:16.118928425Z" level=info msg="connecting to shim 138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a" address="unix:///run/containerd/s/382d24be9882fbd9c12a029aa7af39d3e94687a61d0ba79f18435b0d482d7124" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:16.128355 systemd[1]: Started cri-containerd-75924513032aed62a025af178c958c835e8772f4aba44bfc874251c1f2cdf08e.scope - libcontainer container 75924513032aed62a025af178c958c835e8772f4aba44bfc874251c1f2cdf08e. Jan 23 00:59:16.130644 kubelet[2810]: E0123 00:59:16.130570 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.130644 kubelet[2810]: W0123 00:59:16.130594 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.130644 kubelet[2810]: E0123 00:59:16.130618 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.131410 kubelet[2810]: E0123 00:59:16.131360 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.131410 kubelet[2810]: W0123 00:59:16.131375 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.131410 kubelet[2810]: E0123 00:59:16.131392 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.132339 kubelet[2810]: E0123 00:59:16.132287 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.132339 kubelet[2810]: W0123 00:59:16.132308 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.132339 kubelet[2810]: E0123 00:59:16.132321 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.133190 kubelet[2810]: E0123 00:59:16.133137 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.133190 kubelet[2810]: W0123 00:59:16.133154 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.133190 kubelet[2810]: E0123 00:59:16.133169 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.134128 kubelet[2810]: E0123 00:59:16.134090 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.134128 kubelet[2810]: W0123 00:59:16.134103 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.134128 kubelet[2810]: E0123 00:59:16.134114 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.135056 kubelet[2810]: E0123 00:59:16.135015 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.135056 kubelet[2810]: W0123 00:59:16.135030 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.135056 kubelet[2810]: E0123 00:59:16.135041 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.136494 kubelet[2810]: E0123 00:59:16.135631 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.136614 kubelet[2810]: W0123 00:59:16.136598 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.136665 kubelet[2810]: E0123 00:59:16.136654 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.137719 kubelet[2810]: E0123 00:59:16.137647 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.137719 kubelet[2810]: W0123 00:59:16.137697 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.137802 kubelet[2810]: E0123 00:59:16.137727 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.139338 kubelet[2810]: E0123 00:59:16.139296 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.139338 kubelet[2810]: W0123 00:59:16.139338 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.139444 kubelet[2810]: E0123 00:59:16.139357 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.139803 kubelet[2810]: E0123 00:59:16.139753 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.139803 kubelet[2810]: W0123 00:59:16.139796 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.140034 kubelet[2810]: E0123 00:59:16.139811 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.140601 kubelet[2810]: E0123 00:59:16.140539 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.140749 kubelet[2810]: W0123 00:59:16.140702 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.140749 kubelet[2810]: E0123 00:59:16.140727 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.141550 kubelet[2810]: E0123 00:59:16.141461 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.141550 kubelet[2810]: W0123 00:59:16.141505 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.141550 kubelet[2810]: E0123 00:59:16.141516 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.142523 kubelet[2810]: E0123 00:59:16.142415 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.142523 kubelet[2810]: W0123 00:59:16.142429 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.142606 kubelet[2810]: E0123 00:59:16.142551 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.150185 kubelet[2810]: E0123 00:59:16.150103 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.150185 kubelet[2810]: W0123 00:59:16.150132 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.150185 kubelet[2810]: E0123 00:59:16.150157 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.150883 kubelet[2810]: E0123 00:59:16.150710 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.150883 kubelet[2810]: W0123 00:59:16.150734 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.150883 kubelet[2810]: E0123 00:59:16.150751 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.152505 kubelet[2810]: E0123 00:59:16.152188 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.152505 kubelet[2810]: W0123 00:59:16.152202 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.152505 kubelet[2810]: E0123 00:59:16.152213 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.153406 kubelet[2810]: E0123 00:59:16.153337 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.153406 kubelet[2810]: W0123 00:59:16.153386 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.153406 kubelet[2810]: E0123 00:59:16.153407 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.154298 kubelet[2810]: E0123 00:59:16.154139 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.154298 kubelet[2810]: W0123 00:59:16.154156 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.154298 kubelet[2810]: E0123 00:59:16.154167 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.155052 kubelet[2810]: E0123 00:59:16.155028 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.155052 kubelet[2810]: W0123 00:59:16.155045 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.155139 kubelet[2810]: E0123 00:59:16.155056 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.157054 kubelet[2810]: E0123 00:59:16.156888 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.157054 kubelet[2810]: W0123 00:59:16.156923 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.157054 kubelet[2810]: E0123 00:59:16.156987 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.158034 kubelet[2810]: E0123 00:59:16.157791 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.158034 kubelet[2810]: W0123 00:59:16.157842 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.158034 kubelet[2810]: E0123 00:59:16.157857 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.158743 kubelet[2810]: E0123 00:59:16.158692 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.158743 kubelet[2810]: W0123 00:59:16.158714 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.158743 kubelet[2810]: E0123 00:59:16.158736 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.159389 kubelet[2810]: E0123 00:59:16.159333 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.159389 kubelet[2810]: W0123 00:59:16.159354 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.159389 kubelet[2810]: E0123 00:59:16.159368 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.161098 kubelet[2810]: E0123 00:59:16.159903 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.161098 kubelet[2810]: W0123 00:59:16.159916 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.161098 kubelet[2810]: E0123 00:59:16.159928 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.161098 kubelet[2810]: E0123 00:59:16.160514 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.161098 kubelet[2810]: W0123 00:59:16.160525 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.161098 kubelet[2810]: E0123 00:59:16.160535 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:16.161098 kubelet[2810]: E0123 00:59:16.161048 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:16.161098 kubelet[2810]: W0123 00:59:16.161060 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:16.161098 kubelet[2810]: E0123 00:59:16.161074 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:16.213490 systemd[1]: Started cri-containerd-138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a.scope - libcontainer container 138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a. Jan 23 00:59:16.289445 containerd[1550]: time="2026-01-23T00:59:16.289049375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fdb97c68d-djwd2,Uid:d04b5378-1dad-4b3c-a087-47550c6ac01d,Namespace:calico-system,Attempt:0,} returns sandbox id \"75924513032aed62a025af178c958c835e8772f4aba44bfc874251c1f2cdf08e\"" Jan 23 00:59:16.293260 containerd[1550]: time="2026-01-23T00:59:16.293147848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 00:59:16.312626 containerd[1550]: time="2026-01-23T00:59:16.312487007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tz4vt,Uid:638d912e-3c76-451e-8b3f-ac2a7cf07c41,Namespace:calico-system,Attempt:0,} returns sandbox id \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\"" Jan 23 00:59:17.391479 kubelet[2810]: E0123 00:59:17.391350 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:17.483889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693332572.mount: Deactivated successfully. Jan 23 00:59:18.636305 containerd[1550]: time="2026-01-23T00:59:18.636205803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:18.637558 containerd[1550]: time="2026-01-23T00:59:18.637518178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 23 00:59:18.639442 containerd[1550]: time="2026-01-23T00:59:18.639381180Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:18.643206 containerd[1550]: time="2026-01-23T00:59:18.643099222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:18.644183 containerd[1550]: time="2026-01-23T00:59:18.644066457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.350837558s" Jan 23 00:59:18.644183 containerd[1550]: time="2026-01-23T00:59:18.644132451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 00:59:18.645663 containerd[1550]: time="2026-01-23T00:59:18.645598145Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 00:59:18.682821 containerd[1550]: time="2026-01-23T00:59:18.682706131Z" level=info msg="CreateContainer within sandbox \"75924513032aed62a025af178c958c835e8772f4aba44bfc874251c1f2cdf08e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 00:59:18.702175 containerd[1550]: time="2026-01-23T00:59:18.702063787Z" level=info msg="Container 9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:18.717729 containerd[1550]: time="2026-01-23T00:59:18.717652319Z" level=info msg="CreateContainer within sandbox \"75924513032aed62a025af178c958c835e8772f4aba44bfc874251c1f2cdf08e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1\"" Jan 23 00:59:18.723763 containerd[1550]: time="2026-01-23T00:59:18.721294809Z" level=info msg="StartContainer for \"9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1\"" Jan 23 00:59:18.723763 containerd[1550]: time="2026-01-23T00:59:18.722807884Z" level=info msg="connecting to shim 9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1" address="unix:///run/containerd/s/daad6a82c662f10e9e1305ff158469a76ba96f15a75b69cb62457631994b556a" protocol=ttrpc version=3 Jan 23 00:59:18.757569 systemd[1]: Started cri-containerd-9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1.scope - libcontainer container 9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1. 
Jan 23 00:59:18.931047 containerd[1550]: time="2026-01-23T00:59:18.930278573Z" level=info msg="StartContainer for \"9ca76fc2cf0d6a1a875fda57fc1da28dde2a863c9ae039f9b77a9637e56dfee1\" returns successfully" Jan 23 00:59:19.393678 kubelet[2810]: E0123 00:59:19.392531 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:19.698078 containerd[1550]: time="2026-01-23T00:59:19.697667434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:19.701684 containerd[1550]: time="2026-01-23T00:59:19.701499212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 00:59:19.703545 containerd[1550]: time="2026-01-23T00:59:19.703458781Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:19.706524 containerd[1550]: time="2026-01-23T00:59:19.706446066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:19.707778 containerd[1550]: time="2026-01-23T00:59:19.707670773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.062038426s" Jan 23 00:59:19.707778 containerd[1550]: time="2026-01-23T00:59:19.707764599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 00:59:19.719305 containerd[1550]: time="2026-01-23T00:59:19.718795535Z" level=info msg="CreateContainer within sandbox \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 00:59:19.745382 containerd[1550]: time="2026-01-23T00:59:19.743496806Z" level=info msg="Container 24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:19.749045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769182182.mount: Deactivated successfully. 
Jan 23 00:59:19.776529 containerd[1550]: time="2026-01-23T00:59:19.776365942Z" level=info msg="CreateContainer within sandbox \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6\"" Jan 23 00:59:19.778483 containerd[1550]: time="2026-01-23T00:59:19.778452449Z" level=info msg="StartContainer for \"24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6\"" Jan 23 00:59:19.782501 containerd[1550]: time="2026-01-23T00:59:19.782386597Z" level=info msg="connecting to shim 24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6" address="unix:///run/containerd/s/382d24be9882fbd9c12a029aa7af39d3e94687a61d0ba79f18435b0d482d7124" protocol=ttrpc version=3 Jan 23 00:59:19.840748 systemd[1]: Started cri-containerd-24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6.scope - libcontainer container 24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6. Jan 23 00:59:19.960767 kubelet[2810]: E0123 00:59:19.960058 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.960767 kubelet[2810]: W0123 00:59:19.960182 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.960767 kubelet[2810]: E0123 00:59:19.960213 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.962267 kubelet[2810]: E0123 00:59:19.961743 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.962267 kubelet[2810]: W0123 00:59:19.961762 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.962267 kubelet[2810]: E0123 00:59:19.961834 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.962754 kubelet[2810]: E0123 00:59:19.962690 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.962754 kubelet[2810]: W0123 00:59:19.962727 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.962754 kubelet[2810]: E0123 00:59:19.962745 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.963547 kubelet[2810]: E0123 00:59:19.963341 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.963547 kubelet[2810]: W0123 00:59:19.963380 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.963547 kubelet[2810]: E0123 00:59:19.963396 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.965086 kubelet[2810]: E0123 00:59:19.964861 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.965086 kubelet[2810]: W0123 00:59:19.964897 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.965086 kubelet[2810]: E0123 00:59:19.964912 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.965455 kubelet[2810]: E0123 00:59:19.965361 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.965455 kubelet[2810]: W0123 00:59:19.965389 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.965455 kubelet[2810]: E0123 00:59:19.965423 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.966581 kubelet[2810]: E0123 00:59:19.966557 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.966581 kubelet[2810]: W0123 00:59:19.966575 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.966788 kubelet[2810]: E0123 00:59:19.966588 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.967534 kubelet[2810]: E0123 00:59:19.967495 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.967534 kubelet[2810]: W0123 00:59:19.967518 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.967534 kubelet[2810]: E0123 00:59:19.967532 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.968506 kubelet[2810]: E0123 00:59:19.967835 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.968506 kubelet[2810]: W0123 00:59:19.967849 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.968506 kubelet[2810]: E0123 00:59:19.967861 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.968506 kubelet[2810]: E0123 00:59:19.968433 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.968506 kubelet[2810]: W0123 00:59:19.968444 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.968506 kubelet[2810]: E0123 00:59:19.968458 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.968704 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.975233 kubelet[2810]: W0123 00:59:19.968715 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.968734 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.969789 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.975233 kubelet[2810]: W0123 00:59:19.969802 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.969814 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.970508 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.975233 kubelet[2810]: W0123 00:59:19.970521 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.970534 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.975233 kubelet[2810]: E0123 00:59:19.971661 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.975605 kubelet[2810]: W0123 00:59:19.971677 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.975605 kubelet[2810]: E0123 00:59:19.971691 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.975605 kubelet[2810]: E0123 00:59:19.972166 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.975605 kubelet[2810]: W0123 00:59:19.972178 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.975605 kubelet[2810]: E0123 00:59:19.972192 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.980151 kubelet[2810]: I0123 00:59:19.978380 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fdb97c68d-djwd2" podStartSLOduration=2.625462536 podStartE2EDuration="4.978363153s" podCreationTimestamp="2026-01-23 00:59:15 +0000 UTC" firstStartedPulling="2026-01-23 00:59:16.292614682 +0000 UTC m=+44.187704725" lastFinishedPulling="2026-01-23 00:59:18.645515298 +0000 UTC m=+46.540605342" observedRunningTime="2026-01-23 00:59:19.978259164 +0000 UTC m=+47.873349227" watchObservedRunningTime="2026-01-23 00:59:19.978363153 +0000 UTC m=+47.873453206" Jan 23 00:59:19.997805 kubelet[2810]: E0123 00:59:19.997660 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.997805 kubelet[2810]: W0123 00:59:19.997732 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.997805 kubelet[2810]: E0123 00:59:19.997769 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:19.999408 kubelet[2810]: E0123 00:59:19.998898 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.999408 kubelet[2810]: W0123 00:59:19.998912 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.999408 kubelet[2810]: E0123 00:59:19.999095 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:19.999704 kubelet[2810]: E0123 00:59:19.999582 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:19.999704 kubelet[2810]: W0123 00:59:19.999668 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:19.999704 kubelet[2810]: E0123 00:59:19.999689 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.000445 kubelet[2810]: E0123 00:59:20.000404 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.000445 kubelet[2810]: W0123 00:59:20.000437 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.000445 kubelet[2810]: E0123 00:59:20.000452 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.001093 kubelet[2810]: E0123 00:59:20.000760 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.001093 kubelet[2810]: W0123 00:59:20.000773 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.001093 kubelet[2810]: E0123 00:59:20.000785 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.001867 kubelet[2810]: E0123 00:59:20.001762 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.002092 kubelet[2810]: W0123 00:59:20.001783 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.002438 kubelet[2810]: E0123 00:59:20.002214 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.003474 kubelet[2810]: E0123 00:59:20.003225 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.003619 kubelet[2810]: W0123 00:59:20.003598 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.004087 kubelet[2810]: E0123 00:59:20.004057 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.005426 kubelet[2810]: E0123 00:59:20.005371 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.005426 kubelet[2810]: W0123 00:59:20.005388 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.005426 kubelet[2810]: E0123 00:59:20.005406 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.007599 kubelet[2810]: E0123 00:59:20.007547 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.007599 kubelet[2810]: W0123 00:59:20.007565 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.007599 kubelet[2810]: E0123 00:59:20.007579 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.009033 kubelet[2810]: E0123 00:59:20.008730 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.009033 kubelet[2810]: W0123 00:59:20.008746 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.009033 kubelet[2810]: E0123 00:59:20.008757 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.009290 kubelet[2810]: E0123 00:59:20.009274 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.009382 kubelet[2810]: W0123 00:59:20.009366 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.009465 kubelet[2810]: E0123 00:59:20.009449 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.010141 kubelet[2810]: E0123 00:59:20.010118 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.010242 kubelet[2810]: W0123 00:59:20.010223 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.011593 kubelet[2810]: E0123 00:59:20.010303 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.012550 kubelet[2810]: E0123 00:59:20.012434 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.012884 kubelet[2810]: W0123 00:59:20.012694 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.012884 kubelet[2810]: E0123 00:59:20.012715 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.016062 kubelet[2810]: E0123 00:59:20.015480 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.016155 kubelet[2810]: W0123 00:59:20.016138 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.016333 kubelet[2810]: E0123 00:59:20.016227 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.017053 kubelet[2810]: E0123 00:59:20.016901 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.017053 kubelet[2810]: W0123 00:59:20.016914 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.017053 kubelet[2810]: E0123 00:59:20.016925 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.018477 kubelet[2810]: E0123 00:59:20.018322 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.018477 kubelet[2810]: W0123 00:59:20.018374 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.018477 kubelet[2810]: E0123 00:59:20.018392 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:59:20.019798 kubelet[2810]: E0123 00:59:20.019781 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.020213 kubelet[2810]: W0123 00:59:20.020092 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.020599 kubelet[2810]: E0123 00:59:20.020339 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.020864 kubelet[2810]: E0123 00:59:20.020816 2810 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:59:20.020864 kubelet[2810]: W0123 00:59:20.020833 2810 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:59:20.020864 kubelet[2810]: E0123 00:59:20.020846 2810 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:59:20.048828 containerd[1550]: time="2026-01-23T00:59:20.048748940Z" level=info msg="StartContainer for \"24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6\" returns successfully" Jan 23 00:59:20.095275 systemd[1]: cri-containerd-24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6.scope: Deactivated successfully. Jan 23 00:59:20.096413 systemd[1]: cri-containerd-24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6.scope: Consumed 101ms CPU time, 6.3M memory peak, 3.3M written to disk. 
Jan 23 00:59:20.098874 containerd[1550]: time="2026-01-23T00:59:20.098711457Z" level=info msg="received container exit event container_id:\"24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6\" id:\"24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6\" pid:3480 exited_at:{seconds:1769129960 nanos:97470974}" Jan 23 00:59:20.141845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24ef73b4f5850996a3e02f563f6a0a6a5d21b436e0ceda3f01d64a889a94a7a6-rootfs.mount: Deactivated successfully. Jan 23 00:59:20.963043 containerd[1550]: time="2026-01-23T00:59:20.962623309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 00:59:21.390611 kubelet[2810]: E0123 00:59:21.390535 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:23.392581 kubelet[2810]: E0123 00:59:23.392319 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:24.600615 containerd[1550]: time="2026-01-23T00:59:24.599700608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:24.601573 containerd[1550]: time="2026-01-23T00:59:24.601503255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 00:59:24.604119 containerd[1550]: time="2026-01-23T00:59:24.603900022Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:24.607870 containerd[1550]: time="2026-01-23T00:59:24.607798881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:24.608906 containerd[1550]: time="2026-01-23T00:59:24.608834605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.646164408s" Jan 23 00:59:24.608906 containerd[1550]: time="2026-01-23T00:59:24.608897302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 00:59:24.622853 containerd[1550]: time="2026-01-23T00:59:24.622154378Z" level=info msg="CreateContainer within sandbox \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 00:59:24.654100 containerd[1550]: time="2026-01-23T00:59:24.653460683Z" level=info msg="Container aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:24.675256 containerd[1550]: time="2026-01-23T00:59:24.675147968Z" level=info msg="CreateContainer within sandbox \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25\"" Jan 23 00:59:24.677780 containerd[1550]: time="2026-01-23T00:59:24.677744891Z" 
level=info msg="StartContainer for \"aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25\"" Jan 23 00:59:24.683640 containerd[1550]: time="2026-01-23T00:59:24.683538612Z" level=info msg="connecting to shim aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25" address="unix:///run/containerd/s/382d24be9882fbd9c12a029aa7af39d3e94687a61d0ba79f18435b0d482d7124" protocol=ttrpc version=3 Jan 23 00:59:24.779446 systemd[1]: Started cri-containerd-aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25.scope - libcontainer container aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25. Jan 23 00:59:24.988443 containerd[1550]: time="2026-01-23T00:59:24.987166306Z" level=info msg="StartContainer for \"aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25\" returns successfully" Jan 23 00:59:25.395427 kubelet[2810]: E0123 00:59:25.395119 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:25.909458 update_engine[1541]: I20260123 00:59:25.909073 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:59:25.909458 update_engine[1541]: I20260123 00:59:25.909250 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:59:25.910411 update_engine[1541]: I20260123 00:59:25.909814 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 00:59:25.930112 update_engine[1541]: E20260123 00:59:25.929899 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:59:25.930278 update_engine[1541]: I20260123 00:59:25.930166 1541 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 00:59:26.534230 systemd[1]: cri-containerd-aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25.scope: Deactivated successfully. Jan 23 00:59:26.534719 systemd[1]: cri-containerd-aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25.scope: Consumed 1.351s CPU time, 176.1M memory peak, 3.8M read from disk, 171.3M written to disk. Jan 23 00:59:26.540360 containerd[1550]: time="2026-01-23T00:59:26.539009338Z" level=info msg="received container exit event container_id:\"aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25\" id:\"aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25\" pid:3580 exited_at:{seconds:1769129966 nanos:538085433}" Jan 23 00:59:26.566290 kubelet[2810]: I0123 00:59:26.566205 2810 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 00:59:26.603893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa022357590977d6e1286136a2f67a6b819dc706126a9d2f693536ef5af06c25-rootfs.mount: Deactivated successfully. 
Jan 23 00:59:26.786398 kubelet[2810]: I0123 00:59:26.786229 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgcv\" (UniqueName: \"kubernetes.io/projected/655c83b6-f33b-4c1f-8ca9-c00c869c6e41-kube-api-access-7tgcv\") pod \"calico-kube-controllers-558649896b-xvhfg\" (UID: \"655c83b6-f33b-4c1f-8ca9-c00c869c6e41\") " pod="calico-system/calico-kube-controllers-558649896b-xvhfg" Jan 23 00:59:26.786398 kubelet[2810]: I0123 00:59:26.786312 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm65j\" (UniqueName: \"kubernetes.io/projected/cba0d29e-89d0-474c-bb48-ac261d9e3439-kube-api-access-sm65j\") pod \"coredns-66bc5c9577-ht42l\" (UID: \"cba0d29e-89d0-474c-bb48-ac261d9e3439\") " pod="kube-system/coredns-66bc5c9577-ht42l" Jan 23 00:59:26.786398 kubelet[2810]: I0123 00:59:26.786347 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/655c83b6-f33b-4c1f-8ca9-c00c869c6e41-tigera-ca-bundle\") pod \"calico-kube-controllers-558649896b-xvhfg\" (UID: \"655c83b6-f33b-4c1f-8ca9-c00c869c6e41\") " pod="calico-system/calico-kube-controllers-558649896b-xvhfg" Jan 23 00:59:26.786398 kubelet[2810]: I0123 00:59:26.786371 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cba0d29e-89d0-474c-bb48-ac261d9e3439-config-volume\") pod \"coredns-66bc5c9577-ht42l\" (UID: \"cba0d29e-89d0-474c-bb48-ac261d9e3439\") " pod="kube-system/coredns-66bc5c9577-ht42l" Jan 23 00:59:26.794313 systemd[1]: Created slice kubepods-besteffort-pod655c83b6_f33b_4c1f_8ca9_c00c869c6e41.slice - libcontainer container kubepods-besteffort-pod655c83b6_f33b_4c1f_8ca9_c00c869c6e41.slice. 
Jan 23 00:59:26.810116 systemd[1]: Created slice kubepods-burstable-podcba0d29e_89d0_474c_bb48_ac261d9e3439.slice - libcontainer container kubepods-burstable-podcba0d29e_89d0_474c_bb48_ac261d9e3439.slice. Jan 23 00:59:26.826632 systemd[1]: Created slice kubepods-burstable-podb6b8d32b_ca26_41e6_a351_31a3afa9d455.slice - libcontainer container kubepods-burstable-podb6b8d32b_ca26_41e6_a351_31a3afa9d455.slice. Jan 23 00:59:26.843820 systemd[1]: Created slice kubepods-besteffort-pod7c8b37a9_79e1_44f6_bd0d_7ff95f46b169.slice - libcontainer container kubepods-besteffort-pod7c8b37a9_79e1_44f6_bd0d_7ff95f46b169.slice. Jan 23 00:59:26.857855 systemd[1]: Created slice kubepods-besteffort-pod43accc0b_89ee_4b5d_a714_8b1afe2391c5.slice - libcontainer container kubepods-besteffort-pod43accc0b_89ee_4b5d_a714_8b1afe2391c5.slice. Jan 23 00:59:26.884630 systemd[1]: Created slice kubepods-besteffort-podc12890d6_bb1a_45d9_90e6_7033e466e51a.slice - libcontainer container kubepods-besteffort-podc12890d6_bb1a_45d9_90e6_7033e466e51a.slice. 
Jan 23 00:59:26.889066 kubelet[2810]: I0123 00:59:26.888160 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43accc0b-89ee-4b5d-a714-8b1afe2391c5-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-j5rv6\" (UID: \"43accc0b-89ee-4b5d-a714-8b1afe2391c5\") " pod="calico-system/goldmane-7c778bb748-j5rv6" Jan 23 00:59:26.889066 kubelet[2810]: I0123 00:59:26.888217 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-backend-key-pair\") pod \"whisker-54d6f75d96-dvpcq\" (UID: \"c12890d6-bb1a-45d9-90e6-7033e466e51a\") " pod="calico-system/whisker-54d6f75d96-dvpcq" Jan 23 00:59:26.889066 kubelet[2810]: I0123 00:59:26.888246 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgtp\" (UniqueName: \"kubernetes.io/projected/479d141d-917c-42c5-8315-9e3283f05aa9-kube-api-access-6mgtp\") pod \"calico-apiserver-7cbc9d4d7d-jwv45\" (UID: \"479d141d-917c-42c5-8315-9e3283f05aa9\") " pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" Jan 23 00:59:26.889066 kubelet[2810]: I0123 00:59:26.888275 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6b8d32b-ca26-41e6-a351-31a3afa9d455-config-volume\") pod \"coredns-66bc5c9577-mv9zp\" (UID: \"b6b8d32b-ca26-41e6-a351-31a3afa9d455\") " pod="kube-system/coredns-66bc5c9577-mv9zp" Jan 23 00:59:26.889066 kubelet[2810]: I0123 00:59:26.888326 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/479d141d-917c-42c5-8315-9e3283f05aa9-calico-apiserver-certs\") pod \"calico-apiserver-7cbc9d4d7d-jwv45\" (UID: 
\"479d141d-917c-42c5-8315-9e3283f05aa9\") " pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" Jan 23 00:59:26.889384 kubelet[2810]: I0123 00:59:26.888353 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvjf\" (UniqueName: \"kubernetes.io/projected/7c8b37a9-79e1-44f6-bd0d-7ff95f46b169-kube-api-access-brvjf\") pod \"calico-apiserver-7cbc9d4d7d-44t6d\" (UID: \"7c8b37a9-79e1-44f6-bd0d-7ff95f46b169\") " pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" Jan 23 00:59:26.889384 kubelet[2810]: I0123 00:59:26.888386 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43accc0b-89ee-4b5d-a714-8b1afe2391c5-config\") pod \"goldmane-7c778bb748-j5rv6\" (UID: \"43accc0b-89ee-4b5d-a714-8b1afe2391c5\") " pod="calico-system/goldmane-7c778bb748-j5rv6" Jan 23 00:59:26.889384 kubelet[2810]: I0123 00:59:26.888416 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c8b37a9-79e1-44f6-bd0d-7ff95f46b169-calico-apiserver-certs\") pod \"calico-apiserver-7cbc9d4d7d-44t6d\" (UID: \"7c8b37a9-79e1-44f6-bd0d-7ff95f46b169\") " pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" Jan 23 00:59:26.889384 kubelet[2810]: I0123 00:59:26.888448 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43accc0b-89ee-4b5d-a714-8b1afe2391c5-goldmane-key-pair\") pod \"goldmane-7c778bb748-j5rv6\" (UID: \"43accc0b-89ee-4b5d-a714-8b1afe2391c5\") " pod="calico-system/goldmane-7c778bb748-j5rv6" Jan 23 00:59:26.889384 kubelet[2810]: I0123 00:59:26.888471 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhrlw\" (UniqueName: 
\"kubernetes.io/projected/43accc0b-89ee-4b5d-a714-8b1afe2391c5-kube-api-access-zhrlw\") pod \"goldmane-7c778bb748-j5rv6\" (UID: \"43accc0b-89ee-4b5d-a714-8b1afe2391c5\") " pod="calico-system/goldmane-7c778bb748-j5rv6" Jan 23 00:59:26.889647 kubelet[2810]: I0123 00:59:26.888496 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx49r\" (UniqueName: \"kubernetes.io/projected/c12890d6-bb1a-45d9-90e6-7033e466e51a-kube-api-access-kx49r\") pod \"whisker-54d6f75d96-dvpcq\" (UID: \"c12890d6-bb1a-45d9-90e6-7033e466e51a\") " pod="calico-system/whisker-54d6f75d96-dvpcq" Jan 23 00:59:26.889647 kubelet[2810]: I0123 00:59:26.888561 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwnpv\" (UniqueName: \"kubernetes.io/projected/b6b8d32b-ca26-41e6-a351-31a3afa9d455-kube-api-access-cwnpv\") pod \"coredns-66bc5c9577-mv9zp\" (UID: \"b6b8d32b-ca26-41e6-a351-31a3afa9d455\") " pod="kube-system/coredns-66bc5c9577-mv9zp" Jan 23 00:59:26.889647 kubelet[2810]: I0123 00:59:26.888597 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-ca-bundle\") pod \"whisker-54d6f75d96-dvpcq\" (UID: \"c12890d6-bb1a-45d9-90e6-7033e466e51a\") " pod="calico-system/whisker-54d6f75d96-dvpcq" Jan 23 00:59:26.919901 systemd[1]: Created slice kubepods-besteffort-pod479d141d_917c_42c5_8315_9e3283f05aa9.slice - libcontainer container kubepods-besteffort-pod479d141d_917c_42c5_8315_9e3283f05aa9.slice. 
Jan 23 00:59:26.946642 kubelet[2810]: E0123 00:59:26.942528 2810 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod479d141d_917c_42c5_8315_9e3283f05aa9.slice\": RecentStats: unable to find data in memory cache]" Jan 23 00:59:27.037590 containerd[1550]: time="2026-01-23T00:59:27.037414365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 00:59:27.117863 containerd[1550]: time="2026-01-23T00:59:27.117730180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558649896b-xvhfg,Uid:655c83b6-f33b-4c1f-8ca9-c00c869c6e41,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:27.131824 containerd[1550]: time="2026-01-23T00:59:27.131734907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ht42l,Uid:cba0d29e-89d0-474c-bb48-ac261d9e3439,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:27.140912 containerd[1550]: time="2026-01-23T00:59:27.140658282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mv9zp,Uid:b6b8d32b-ca26-41e6-a351-31a3afa9d455,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:27.157318 containerd[1550]: time="2026-01-23T00:59:27.157174348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-44t6d,Uid:7c8b37a9-79e1-44f6-bd0d-7ff95f46b169,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:59:27.168917 containerd[1550]: time="2026-01-23T00:59:27.168797365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j5rv6,Uid:43accc0b-89ee-4b5d-a714-8b1afe2391c5,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:27.201239 containerd[1550]: time="2026-01-23T00:59:27.200688555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d6f75d96-dvpcq,Uid:c12890d6-bb1a-45d9-90e6-7033e466e51a,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:27.252904 containerd[1550]: 
time="2026-01-23T00:59:27.252802861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-jwv45,Uid:479d141d-917c-42c5-8315-9e3283f05aa9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:59:27.404642 systemd[1]: Created slice kubepods-besteffort-pod1a86b4da_5edc_4f85_b21e_20314381c9bb.slice - libcontainer container kubepods-besteffort-pod1a86b4da_5edc_4f85_b21e_20314381c9bb.slice. Jan 23 00:59:27.424453 containerd[1550]: time="2026-01-23T00:59:27.424409995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpc42,Uid:1a86b4da-5edc-4f85-b21e-20314381c9bb,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:27.592698 containerd[1550]: time="2026-01-23T00:59:27.592599591Z" level=error msg="Failed to destroy network for sandbox \"005f443546d2970a345b8e7ce411abbc8259a3ea137964eaf54c1823fa0dd53d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.601307 containerd[1550]: time="2026-01-23T00:59:27.601188851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d6f75d96-dvpcq,Uid:c12890d6-bb1a-45d9-90e6-7033e466e51a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"005f443546d2970a345b8e7ce411abbc8259a3ea137964eaf54c1823fa0dd53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.601754 kubelet[2810]: E0123 00:59:27.601555 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005f443546d2970a345b8e7ce411abbc8259a3ea137964eaf54c1823fa0dd53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.602395 kubelet[2810]: E0123 00:59:27.601807 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005f443546d2970a345b8e7ce411abbc8259a3ea137964eaf54c1823fa0dd53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d6f75d96-dvpcq" Jan 23 00:59:27.602395 kubelet[2810]: E0123 00:59:27.601864 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"005f443546d2970a345b8e7ce411abbc8259a3ea137964eaf54c1823fa0dd53d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d6f75d96-dvpcq" Jan 23 00:59:27.605567 kubelet[2810]: E0123 00:59:27.605383 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54d6f75d96-dvpcq_calico-system(c12890d6-bb1a-45d9-90e6-7033e466e51a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54d6f75d96-dvpcq_calico-system(c12890d6-bb1a-45d9-90e6-7033e466e51a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"005f443546d2970a345b8e7ce411abbc8259a3ea137964eaf54c1823fa0dd53d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54d6f75d96-dvpcq" podUID="c12890d6-bb1a-45d9-90e6-7033e466e51a" Jan 23 00:59:27.639087 containerd[1550]: time="2026-01-23T00:59:27.638866935Z" level=error msg="Failed to destroy network for sandbox 
\"3acf6edb8ee65b370e5b87ba3c6afb61e1370c88bdf9635dd8444be15ec4c063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.643567 systemd[1]: run-netns-cni\x2ddbd1d950\x2d03e4\x2df291\x2d0944\x2d15e89ed1ccd3.mount: Deactivated successfully. Jan 23 00:59:27.652815 containerd[1550]: time="2026-01-23T00:59:27.652711021Z" level=error msg="Failed to destroy network for sandbox \"8de98036e22b2ceeff8655a0237fff0f32703b31320f2548731b3638bdd0da3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.660696 systemd[1]: run-netns-cni\x2d0fca7c9a\x2da575\x2db3b5\x2dc30a\x2dbce8e7f45d30.mount: Deactivated successfully. Jan 23 00:59:27.667840 containerd[1550]: time="2026-01-23T00:59:27.667783367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-jwv45,Uid:479d141d-917c-42c5-8315-9e3283f05aa9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acf6edb8ee65b370e5b87ba3c6afb61e1370c88bdf9635dd8444be15ec4c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.669713 kubelet[2810]: E0123 00:59:27.669654 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acf6edb8ee65b370e5b87ba3c6afb61e1370c88bdf9635dd8444be15ec4c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.671453 kubelet[2810]: E0123 
00:59:27.669931 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acf6edb8ee65b370e5b87ba3c6afb61e1370c88bdf9635dd8444be15ec4c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" Jan 23 00:59:27.671530 containerd[1550]: time="2026-01-23T00:59:27.671339515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ht42l,Uid:cba0d29e-89d0-474c-bb48-ac261d9e3439,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de98036e22b2ceeff8655a0237fff0f32703b31320f2548731b3638bdd0da3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.672445 kubelet[2810]: E0123 00:59:27.672412 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3acf6edb8ee65b370e5b87ba3c6afb61e1370c88bdf9635dd8444be15ec4c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" Jan 23 00:59:27.673830 kubelet[2810]: E0123 00:59:27.672593 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"3acf6edb8ee65b370e5b87ba3c6afb61e1370c88bdf9635dd8444be15ec4c063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 00:59:27.685192 kubelet[2810]: E0123 00:59:27.676846 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de98036e22b2ceeff8655a0237fff0f32703b31320f2548731b3638bdd0da3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.685192 kubelet[2810]: E0123 00:59:27.676885 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de98036e22b2ceeff8655a0237fff0f32703b31320f2548731b3638bdd0da3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ht42l" Jan 23 00:59:27.685192 kubelet[2810]: E0123 00:59:27.676902 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de98036e22b2ceeff8655a0237fff0f32703b31320f2548731b3638bdd0da3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ht42l" Jan 23 00:59:27.685327 kubelet[2810]: E0123 00:59:27.677753 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-ht42l_kube-system(cba0d29e-89d0-474c-bb48-ac261d9e3439)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ht42l_kube-system(cba0d29e-89d0-474c-bb48-ac261d9e3439)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8de98036e22b2ceeff8655a0237fff0f32703b31320f2548731b3638bdd0da3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ht42l" podUID="cba0d29e-89d0-474c-bb48-ac261d9e3439" Jan 23 00:59:27.696280 containerd[1550]: time="2026-01-23T00:59:27.696227733Z" level=error msg="Failed to destroy network for sandbox \"7f5d9a20dd9692453a3018332bb7f9038430e8bd523ef06a03bda6c1a7069b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.702514 containerd[1550]: time="2026-01-23T00:59:27.698262651Z" level=error msg="Failed to destroy network for sandbox \"93837cca2e94b566ee531a8c6d616d87aeee95f675b190fe4a1c0942408ea9aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.702263 systemd[1]: run-netns-cni\x2df46576ef\x2d1686\x2da8f5\x2df2a5\x2dffe82cb260ed.mount: Deactivated successfully. Jan 23 00:59:27.708294 systemd[1]: run-netns-cni\x2d71749475\x2d8f36\x2d6ec0\x2da661\x2df115da7a19e3.mount: Deactivated successfully. 
Jan 23 00:59:27.711771 containerd[1550]: time="2026-01-23T00:59:27.711626592Z" level=error msg="Failed to destroy network for sandbox \"f49fbca8c4164667aa2aff83bfa63cbb8fb1ebf0271f3027410dbf6caeb243f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.712359 containerd[1550]: time="2026-01-23T00:59:27.712252278Z" level=error msg="Failed to destroy network for sandbox \"02f77a51adef89ec78e24c6ec600f236029c1f589937d0eb4f9f96080e3eb7c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.715069 containerd[1550]: time="2026-01-23T00:59:27.714876817Z" level=error msg="Failed to destroy network for sandbox \"27f1e7a279ab00a1e07dbdf39edc128099ee5cd788af04b81479549415a15c53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.723888 containerd[1550]: time="2026-01-23T00:59:27.723834251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-44t6d,Uid:7c8b37a9-79e1-44f6-bd0d-7ff95f46b169,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93837cca2e94b566ee531a8c6d616d87aeee95f675b190fe4a1c0942408ea9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.725559 kubelet[2810]: E0123 00:59:27.725418 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"93837cca2e94b566ee531a8c6d616d87aeee95f675b190fe4a1c0942408ea9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.725559 kubelet[2810]: E0123 00:59:27.725493 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93837cca2e94b566ee531a8c6d616d87aeee95f675b190fe4a1c0942408ea9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" Jan 23 00:59:27.725559 kubelet[2810]: E0123 00:59:27.725513 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93837cca2e94b566ee531a8c6d616d87aeee95f675b190fe4a1c0942408ea9aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" Jan 23 00:59:27.725800 kubelet[2810]: E0123 00:59:27.725566 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93837cca2e94b566ee531a8c6d616d87aeee95f675b190fe4a1c0942408ea9aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 00:59:27.728249 containerd[1550]: time="2026-01-23T00:59:27.728082289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558649896b-xvhfg,Uid:655c83b6-f33b-4c1f-8ca9-c00c869c6e41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d9a20dd9692453a3018332bb7f9038430e8bd523ef06a03bda6c1a7069b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.728626 kubelet[2810]: E0123 00:59:27.728369 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d9a20dd9692453a3018332bb7f9038430e8bd523ef06a03bda6c1a7069b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.728626 kubelet[2810]: E0123 00:59:27.728541 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d9a20dd9692453a3018332bb7f9038430e8bd523ef06a03bda6c1a7069b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" Jan 23 00:59:27.728626 kubelet[2810]: E0123 00:59:27.728565 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d9a20dd9692453a3018332bb7f9038430e8bd523ef06a03bda6c1a7069b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" Jan 23 00:59:27.729003 kubelet[2810]: E0123 00:59:27.728781 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f5d9a20dd9692453a3018332bb7f9038430e8bd523ef06a03bda6c1a7069b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 00:59:27.732801 containerd[1550]: time="2026-01-23T00:59:27.731201308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j5rv6,Uid:43accc0b-89ee-4b5d-a714-8b1afe2391c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49fbca8c4164667aa2aff83bfa63cbb8fb1ebf0271f3027410dbf6caeb243f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.733207 kubelet[2810]: E0123 00:59:27.731797 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49fbca8c4164667aa2aff83bfa63cbb8fb1ebf0271f3027410dbf6caeb243f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 23 00:59:27.733207 kubelet[2810]: E0123 00:59:27.731914 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49fbca8c4164667aa2aff83bfa63cbb8fb1ebf0271f3027410dbf6caeb243f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-j5rv6" Jan 23 00:59:27.733207 kubelet[2810]: E0123 00:59:27.731930 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f49fbca8c4164667aa2aff83bfa63cbb8fb1ebf0271f3027410dbf6caeb243f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-j5rv6" Jan 23 00:59:27.733735 kubelet[2810]: E0123 00:59:27.732124 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f49fbca8c4164667aa2aff83bfa63cbb8fb1ebf0271f3027410dbf6caeb243f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 00:59:27.739493 containerd[1550]: time="2026-01-23T00:59:27.739379959Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qpc42,Uid:1a86b4da-5edc-4f85-b21e-20314381c9bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f77a51adef89ec78e24c6ec600f236029c1f589937d0eb4f9f96080e3eb7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.739735 kubelet[2810]: E0123 00:59:27.739579 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f77a51adef89ec78e24c6ec600f236029c1f589937d0eb4f9f96080e3eb7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.739735 kubelet[2810]: E0123 00:59:27.739632 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f77a51adef89ec78e24c6ec600f236029c1f589937d0eb4f9f96080e3eb7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:27.739735 kubelet[2810]: E0123 00:59:27.739697 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f77a51adef89ec78e24c6ec600f236029c1f589937d0eb4f9f96080e3eb7c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qpc42" Jan 23 00:59:27.739868 kubelet[2810]: E0123 00:59:27.739757 2810 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f77a51adef89ec78e24c6ec600f236029c1f589937d0eb4f9f96080e3eb7c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:27.745299 containerd[1550]: time="2026-01-23T00:59:27.744784602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mv9zp,Uid:b6b8d32b-ca26-41e6-a351-31a3afa9d455,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f1e7a279ab00a1e07dbdf39edc128099ee5cd788af04b81479549415a15c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.745459 kubelet[2810]: E0123 00:59:27.745410 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f1e7a279ab00a1e07dbdf39edc128099ee5cd788af04b81479549415a15c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:27.745516 kubelet[2810]: E0123 00:59:27.745454 2810 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f1e7a279ab00a1e07dbdf39edc128099ee5cd788af04b81479549415a15c53\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mv9zp" Jan 23 00:59:27.745516 kubelet[2810]: E0123 00:59:27.745478 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27f1e7a279ab00a1e07dbdf39edc128099ee5cd788af04b81479549415a15c53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mv9zp" Jan 23 00:59:27.745586 kubelet[2810]: E0123 00:59:27.745526 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mv9zp_kube-system(b6b8d32b-ca26-41e6-a351-31a3afa9d455)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mv9zp_kube-system(b6b8d32b-ca26-41e6-a351-31a3afa9d455)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27f1e7a279ab00a1e07dbdf39edc128099ee5cd788af04b81479549415a15c53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mv9zp" podUID="b6b8d32b-ca26-41e6-a351-31a3afa9d455" Jan 23 00:59:28.601871 systemd[1]: run-netns-cni\x2d9ca4fe6b\x2d9c3c\x2d45af\x2defae\x2dcec52724668a.mount: Deactivated successfully. Jan 23 00:59:28.602176 systemd[1]: run-netns-cni\x2db32a9998\x2d81ec\x2d9c1b\x2d0a36\x2de1cc6dc8b730.mount: Deactivated successfully. Jan 23 00:59:28.602273 systemd[1]: run-netns-cni\x2dc38410fe\x2db817\x2dfc83\x2dc504\x2d0a69320de8bf.mount: Deactivated successfully. 
Jan 23 00:59:35.912865 update_engine[1541]: I20260123 00:59:35.911592 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:59:35.912865 update_engine[1541]: I20260123 00:59:35.912177 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:59:35.912865 update_engine[1541]: I20260123 00:59:35.912765 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 00:59:35.930718 update_engine[1541]: E20260123 00:59:35.930404 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:59:35.930927 update_engine[1541]: I20260123 00:59:35.930804 1541 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 00:59:37.807360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1164775842.mount: Deactivated successfully. Jan 23 00:59:37.987197 containerd[1550]: time="2026-01-23T00:59:37.986759419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:37.991872 containerd[1550]: time="2026-01-23T00:59:37.990306158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 00:59:37.993380 containerd[1550]: time="2026-01-23T00:59:37.993203864Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:38.000713 containerd[1550]: time="2026-01-23T00:59:38.000622801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:59:38.002417 containerd[1550]: time="2026-01-23T00:59:38.002302587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.964834943s" Jan 23 00:59:38.002417 containerd[1550]: time="2026-01-23T00:59:38.002383529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 00:59:38.041031 containerd[1550]: time="2026-01-23T00:59:38.040869544Z" level=info msg="CreateContainer within sandbox \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 00:59:38.063402 containerd[1550]: time="2026-01-23T00:59:38.061128103Z" level=info msg="Container 2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:38.098396 containerd[1550]: time="2026-01-23T00:59:38.098290084Z" level=info msg="CreateContainer within sandbox \"138d1a5d434826dd1682e991c4c00ec2226231123f2b93aa87c82208e1a2937a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65\"" Jan 23 00:59:38.099540 containerd[1550]: time="2026-01-23T00:59:38.099380238Z" level=info msg="StartContainer for \"2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65\"" Jan 23 00:59:38.118821 containerd[1550]: time="2026-01-23T00:59:38.118743032Z" level=info msg="connecting to shim 2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65" address="unix:///run/containerd/s/382d24be9882fbd9c12a029aa7af39d3e94687a61d0ba79f18435b0d482d7124" protocol=ttrpc version=3 Jan 23 00:59:38.229386 systemd[1]: Started cri-containerd-2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65.scope - libcontainer container 
2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65. Jan 23 00:59:38.406673 containerd[1550]: time="2026-01-23T00:59:38.405779763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ht42l,Uid:cba0d29e-89d0-474c-bb48-ac261d9e3439,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:38.472603 containerd[1550]: time="2026-01-23T00:59:38.471665602Z" level=info msg="StartContainer for \"2735c7f62974e0218dea12e4fe4dbde6002c3c79017c5c85be349eec8ef26a65\" returns successfully" Jan 23 00:59:38.585467 containerd[1550]: time="2026-01-23T00:59:38.585242722Z" level=error msg="Failed to destroy network for sandbox \"950790d5f91e3b0e63afbcb9fe2ec165866c4c0e232d6ae445fcb708a080c023\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:38.589601 containerd[1550]: time="2026-01-23T00:59:38.589382038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ht42l,Uid:cba0d29e-89d0-474c-bb48-ac261d9e3439,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"950790d5f91e3b0e63afbcb9fe2ec165866c4c0e232d6ae445fcb708a080c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:38.593058 kubelet[2810]: E0123 00:59:38.591559 2810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950790d5f91e3b0e63afbcb9fe2ec165866c4c0e232d6ae445fcb708a080c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:59:38.595191 kubelet[2810]: E0123 00:59:38.592845 2810 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950790d5f91e3b0e63afbcb9fe2ec165866c4c0e232d6ae445fcb708a080c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ht42l" Jan 23 00:59:38.595191 kubelet[2810]: E0123 00:59:38.594535 2810 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"950790d5f91e3b0e63afbcb9fe2ec165866c4c0e232d6ae445fcb708a080c023\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ht42l" Jan 23 00:59:38.595191 kubelet[2810]: E0123 00:59:38.594674 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ht42l_kube-system(cba0d29e-89d0-474c-bb48-ac261d9e3439)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ht42l_kube-system(cba0d29e-89d0-474c-bb48-ac261d9e3439)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"950790d5f91e3b0e63afbcb9fe2ec165866c4c0e232d6ae445fcb708a080c023\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ht42l" podUID="cba0d29e-89d0-474c-bb48-ac261d9e3439" Jan 23 00:59:38.738644 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 00:59:38.742569 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 00:59:39.264362 kubelet[2810]: I0123 00:59:39.264192 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tz4vt" podStartSLOduration=2.574762772 podStartE2EDuration="24.264162048s" podCreationTimestamp="2026-01-23 00:59:15 +0000 UTC" firstStartedPulling="2026-01-23 00:59:16.31582052 +0000 UTC m=+44.210910563" lastFinishedPulling="2026-01-23 00:59:38.005219796 +0000 UTC m=+65.900309839" observedRunningTime="2026-01-23 00:59:39.24845258 +0000 UTC m=+67.143542643" watchObservedRunningTime="2026-01-23 00:59:39.264162048 +0000 UTC m=+67.159252101" Jan 23 00:59:39.403828 kubelet[2810]: I0123 00:59:39.403579 2810 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-backend-key-pair\") pod \"c12890d6-bb1a-45d9-90e6-7033e466e51a\" (UID: \"c12890d6-bb1a-45d9-90e6-7033e466e51a\") " Jan 23 00:59:39.403828 kubelet[2810]: I0123 00:59:39.403711 2810 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-ca-bundle\") pod \"c12890d6-bb1a-45d9-90e6-7033e466e51a\" (UID: \"c12890d6-bb1a-45d9-90e6-7033e466e51a\") " Jan 23 00:59:39.403828 kubelet[2810]: I0123 00:59:39.403743 2810 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx49r\" (UniqueName: \"kubernetes.io/projected/c12890d6-bb1a-45d9-90e6-7033e466e51a-kube-api-access-kx49r\") pod \"c12890d6-bb1a-45d9-90e6-7033e466e51a\" (UID: \"c12890d6-bb1a-45d9-90e6-7033e466e51a\") " Jan 23 00:59:39.413683 kubelet[2810]: I0123 00:59:39.412691 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"c12890d6-bb1a-45d9-90e6-7033e466e51a" (UID: "c12890d6-bb1a-45d9-90e6-7033e466e51a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:59:39.426054 systemd[1]: var-lib-kubelet-pods-c12890d6\x2dbb1a\x2d45d9\x2d90e6\x2d7033e466e51a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 00:59:39.431049 kubelet[2810]: I0123 00:59:39.428875 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12890d6-bb1a-45d9-90e6-7033e466e51a-kube-api-access-kx49r" (OuterVolumeSpecName: "kube-api-access-kx49r") pod "c12890d6-bb1a-45d9-90e6-7033e466e51a" (UID: "c12890d6-bb1a-45d9-90e6-7033e466e51a"). InnerVolumeSpecName "kube-api-access-kx49r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:59:39.431049 kubelet[2810]: I0123 00:59:39.429483 2810 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c12890d6-bb1a-45d9-90e6-7033e466e51a" (UID: "c12890d6-bb1a-45d9-90e6-7033e466e51a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 00:59:39.431733 systemd[1]: var-lib-kubelet-pods-c12890d6\x2dbb1a\x2d45d9\x2d90e6\x2d7033e466e51a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkx49r.mount: Deactivated successfully. 
Jan 23 00:59:39.508323 kubelet[2810]: I0123 00:59:39.507762 2810 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 23 00:59:39.508323 kubelet[2810]: I0123 00:59:39.508163 2810 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kx49r\" (UniqueName: \"kubernetes.io/projected/c12890d6-bb1a-45d9-90e6-7033e466e51a-kube-api-access-kx49r\") on node \"localhost\" DevicePath \"\"" Jan 23 00:59:39.508323 kubelet[2810]: I0123 00:59:39.508178 2810 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c12890d6-bb1a-45d9-90e6-7033e466e51a-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jan 23 00:59:40.173658 systemd[1]: Removed slice kubepods-besteffort-podc12890d6_bb1a_45d9_90e6_7033e466e51a.slice - libcontainer container kubepods-besteffort-podc12890d6_bb1a_45d9_90e6_7033e466e51a.slice. 
Jan 23 00:59:40.400346 kubelet[2810]: I0123 00:59:40.400285 2810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12890d6-bb1a-45d9-90e6-7033e466e51a" path="/var/lib/kubelet/pods/c12890d6-bb1a-45d9-90e6-7033e466e51a/volumes" Jan 23 00:59:40.409320 containerd[1550]: time="2026-01-23T00:59:40.409270480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-44t6d,Uid:7c8b37a9-79e1-44f6-bd0d-7ff95f46b169,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:59:40.520420 kubelet[2810]: I0123 00:59:40.519809 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a42d98b-0861-4abd-98c0-5f1896587e7b-whisker-ca-bundle\") pod \"whisker-c95865886-4cvht\" (UID: \"6a42d98b-0861-4abd-98c0-5f1896587e7b\") " pod="calico-system/whisker-c95865886-4cvht" Jan 23 00:59:40.520848 kubelet[2810]: I0123 00:59:40.520602 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv2v6\" (UniqueName: \"kubernetes.io/projected/6a42d98b-0861-4abd-98c0-5f1896587e7b-kube-api-access-qv2v6\") pod \"whisker-c95865886-4cvht\" (UID: \"6a42d98b-0861-4abd-98c0-5f1896587e7b\") " pod="calico-system/whisker-c95865886-4cvht" Jan 23 00:59:40.520848 kubelet[2810]: I0123 00:59:40.520669 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a42d98b-0861-4abd-98c0-5f1896587e7b-whisker-backend-key-pair\") pod \"whisker-c95865886-4cvht\" (UID: \"6a42d98b-0861-4abd-98c0-5f1896587e7b\") " pod="calico-system/whisker-c95865886-4cvht" Jan 23 00:59:40.522233 systemd[1]: Created slice kubepods-besteffort-pod6a42d98b_0861_4abd_98c0_5f1896587e7b.slice - libcontainer container kubepods-besteffort-pod6a42d98b_0861_4abd_98c0_5f1896587e7b.slice. 
Jan 23 00:59:40.840109 containerd[1550]: time="2026-01-23T00:59:40.839781182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c95865886-4cvht,Uid:6a42d98b-0861-4abd-98c0-5f1896587e7b,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:41.031375 systemd-networkd[1459]: cali92c287d7d3b: Link UP Jan 23 00:59:41.032198 systemd-networkd[1459]: cali92c287d7d3b: Gained carrier Jan 23 00:59:41.073389 containerd[1550]: 2026-01-23 00:59:40.463 [INFO][4044] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 00:59:41.073389 containerd[1550]: 2026-01-23 00:59:40.522 [INFO][4044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0 calico-apiserver-7cbc9d4d7d- calico-apiserver 7c8b37a9-79e1-44f6-bd0d-7ff95f46b169 877 0 2026-01-23 00:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cbc9d4d7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cbc9d4d7d-44t6d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92c287d7d3b [] [] }} ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-" Jan 23 00:59:41.073389 containerd[1550]: 2026-01-23 00:59:40.523 [INFO][4044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.073389 containerd[1550]: 2026-01-23 00:59:40.783 [INFO][4060] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" HandleID="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Workload="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.784 [INFO][4060] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" HandleID="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Workload="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001373a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cbc9d4d7d-44t6d", "timestamp":"2026-01-23 00:59:40.783329146 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.785 [INFO][4060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.785 [INFO][4060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.786 [INFO][4060] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.806 [INFO][4060] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" host="localhost" Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.841 [INFO][4060] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.878 [INFO][4060] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.886 [INFO][4060] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.907 [INFO][4060] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:41.073728 containerd[1550]: 2026-01-23 00:59:40.907 [INFO][4060] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" host="localhost" Jan 23 00:59:41.074899 containerd[1550]: 2026-01-23 00:59:40.910 [INFO][4060] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f Jan 23 00:59:41.074899 containerd[1550]: 2026-01-23 00:59:40.944 [INFO][4060] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" host="localhost" Jan 23 00:59:41.074899 containerd[1550]: 2026-01-23 00:59:40.961 [INFO][4060] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" host="localhost" Jan 23 00:59:41.074899 containerd[1550]: 2026-01-23 00:59:40.961 [INFO][4060] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" host="localhost" Jan 23 00:59:41.074899 containerd[1550]: 2026-01-23 00:59:40.961 [INFO][4060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:41.074899 containerd[1550]: 2026-01-23 00:59:40.962 [INFO][4060] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" HandleID="k8s-pod-network.cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Workload="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.075711 containerd[1550]: 2026-01-23 00:59:40.981 [INFO][4044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0", GenerateName:"calico-apiserver-7cbc9d4d7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c8b37a9-79e1-44f6-bd0d-7ff95f46b169", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbc9d4d7d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cbc9d4d7d-44t6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92c287d7d3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:41.076246 containerd[1550]: 2026-01-23 00:59:40.986 [INFO][4044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.076246 containerd[1550]: 2026-01-23 00:59:40.986 [INFO][4044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92c287d7d3b ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.076246 containerd[1550]: 2026-01-23 00:59:41.030 [INFO][4044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.076367 containerd[1550]: 2026-01-23 00:59:41.031 [INFO][4044] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0", GenerateName:"calico-apiserver-7cbc9d4d7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c8b37a9-79e1-44f6-bd0d-7ff95f46b169", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbc9d4d7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f", Pod:"calico-apiserver-7cbc9d4d7d-44t6d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92c287d7d3b", MAC:"4a:2e:2c:f9:1a:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:41.076488 containerd[1550]: 2026-01-23 00:59:41.068 [INFO][4044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-44t6d" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--44t6d-eth0" Jan 23 00:59:41.284476 containerd[1550]: time="2026-01-23T00:59:41.284289922Z" level=info msg="connecting to shim cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f" address="unix:///run/containerd/s/69b8f67dadb6534ef88556f349db050235e65cf08a2e79a5e3c5b95cdc144215" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:41.364537 systemd-networkd[1459]: cali81080336569: Link UP Jan 23 00:59:41.368905 systemd-networkd[1459]: cali81080336569: Gained carrier Jan 23 00:59:41.440209 containerd[1550]: time="2026-01-23T00:59:41.435819982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-jwv45,Uid:479d141d-917c-42c5-8315-9e3283f05aa9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:59:41.466357 systemd[1]: Started cri-containerd-cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f.scope - libcontainer container cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f. 
Jan 23 00:59:41.484801 containerd[1550]: 2026-01-23 00:59:40.939 [INFO][4070] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 00:59:41.484801 containerd[1550]: 2026-01-23 00:59:40.980 [INFO][4070] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--c95865886--4cvht-eth0 whisker-c95865886- calico-system 6a42d98b-0861-4abd-98c0-5f1896587e7b 959 0 2026-01-23 00:59:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c95865886 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-c95865886-4cvht eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali81080336569 [] [] }} ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-" Jan 23 00:59:41.484801 containerd[1550]: 2026-01-23 00:59:40.980 [INFO][4070] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.484801 containerd[1550]: 2026-01-23 00:59:41.076 [INFO][4085] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" HandleID="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Workload="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.078 [INFO][4085] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" 
HandleID="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Workload="localhost-k8s-whisker--c95865886--4cvht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-c95865886-4cvht", "timestamp":"2026-01-23 00:59:41.076578828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.078 [INFO][4085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.078 [INFO][4085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.078 [INFO][4085] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.107 [INFO][4085] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" host="localhost" Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.138 [INFO][4085] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.173 [INFO][4085] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.185 [INFO][4085] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.203 [INFO][4085] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:41.485620 containerd[1550]: 2026-01-23 00:59:41.203 
[INFO][4085] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" host="localhost" Jan 23 00:59:41.486176 containerd[1550]: 2026-01-23 00:59:41.219 [INFO][4085] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96 Jan 23 00:59:41.486176 containerd[1550]: 2026-01-23 00:59:41.264 [INFO][4085] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" host="localhost" Jan 23 00:59:41.486176 containerd[1550]: 2026-01-23 00:59:41.308 [INFO][4085] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" host="localhost" Jan 23 00:59:41.486176 containerd[1550]: 2026-01-23 00:59:41.312 [INFO][4085] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" host="localhost" Jan 23 00:59:41.486176 containerd[1550]: 2026-01-23 00:59:41.312 [INFO][4085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:59:41.486176 containerd[1550]: 2026-01-23 00:59:41.313 [INFO][4085] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" HandleID="k8s-pod-network.d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Workload="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.486353 containerd[1550]: 2026-01-23 00:59:41.355 [INFO][4070] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c95865886--4cvht-eth0", GenerateName:"whisker-c95865886-", Namespace:"calico-system", SelfLink:"", UID:"6a42d98b-0861-4abd-98c0-5f1896587e7b", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c95865886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-c95865886-4cvht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81080336569", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:41.486353 containerd[1550]: 2026-01-23 00:59:41.357 [INFO][4070] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.486527 containerd[1550]: 2026-01-23 00:59:41.357 [INFO][4070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81080336569 ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.486527 containerd[1550]: 2026-01-23 00:59:41.368 [INFO][4070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.487088 containerd[1550]: 2026-01-23 00:59:41.369 [INFO][4070] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--c95865886--4cvht-eth0", GenerateName:"whisker-c95865886-", Namespace:"calico-system", SelfLink:"", UID:"6a42d98b-0861-4abd-98c0-5f1896587e7b", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c95865886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96", Pod:"whisker-c95865886-4cvht", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81080336569", MAC:"ea:33:b6:18:32:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:41.487249 containerd[1550]: 2026-01-23 00:59:41.480 [INFO][4070] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" Namespace="calico-system" Pod="whisker-c95865886-4cvht" WorkloadEndpoint="localhost-k8s-whisker--c95865886--4cvht-eth0" Jan 23 00:59:41.646637 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:41.677898 containerd[1550]: time="2026-01-23T00:59:41.677272082Z" level=info msg="connecting to shim d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96" address="unix:///run/containerd/s/a7dec44da0687e8f39410319e151ab4051d75ec81aaf50b13786586cfbb7bc03" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:41.850398 systemd[1]: Started cri-containerd-d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96.scope - libcontainer container 
d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96. Jan 23 00:59:41.898748 containerd[1550]: time="2026-01-23T00:59:41.898467983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-44t6d,Uid:7c8b37a9-79e1-44f6-bd0d-7ff95f46b169,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cf905e0b8cdd01edc19614df7e29187ef914e94f702e96a97d772cfae3fea32f\"" Jan 23 00:59:41.908680 containerd[1550]: time="2026-01-23T00:59:41.908468849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:59:41.922085 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:42.091412 containerd[1550]: time="2026-01-23T00:59:42.089654310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:42.112690 containerd[1550]: time="2026-01-23T00:59:42.112291165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:42.130306 containerd[1550]: time="2026-01-23T00:59:42.129863199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:59:42.139390 kubelet[2810]: E0123 00:59:42.139218 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:42.140210 kubelet[2810]: E0123 00:59:42.139458 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:42.140210 kubelet[2810]: E0123 00:59:42.139686 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:42.140210 kubelet[2810]: E0123 00:59:42.139787 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 00:59:42.165361 containerd[1550]: time="2026-01-23T00:59:42.165166394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c95865886-4cvht,Uid:6a42d98b-0861-4abd-98c0-5f1896587e7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d73d7418fd0e5c2a9f957f02e6180ab685b92c1c07a4b52232c6b10fca6b0f96\"" Jan 23 00:59:42.190894 containerd[1550]: time="2026-01-23T00:59:42.190804829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:59:42.199413 kubelet[2810]: E0123 00:59:42.199296 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 00:59:42.233173 systemd-networkd[1459]: cali9bbfd657699: Link UP Jan 23 00:59:42.236892 systemd-networkd[1459]: cali9bbfd657699: Gained carrier Jan 23 00:59:42.308671 containerd[1550]: time="2026-01-23T00:59:42.308625324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:42.312165 containerd[1550]: time="2026-01-23T00:59:42.311761973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:59:42.312165 containerd[1550]: time="2026-01-23T00:59:42.311810407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:59:42.313141 kubelet[2810]: E0123 00:59:42.313085 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:59:42.313482 kubelet[2810]: E0123 00:59:42.313400 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:59:42.314532 kubelet[2810]: E0123 00:59:42.314331 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:42.322382 containerd[1550]: time="2026-01-23T00:59:42.322214858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:59:42.344796 containerd[1550]: 2026-01-23 00:59:41.586 [INFO][4219] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 00:59:42.344796 containerd[1550]: 2026-01-23 00:59:41.639 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0 calico-apiserver-7cbc9d4d7d- calico-apiserver 479d141d-917c-42c5-8315-9e3283f05aa9 883 0 2026-01-23 00:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cbc9d4d7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cbc9d4d7d-jwv45 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9bbfd657699 [] [] }} ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-" Jan 23 00:59:42.344796 containerd[1550]: 2026-01-23 
00:59:41.640 [INFO][4219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.344796 containerd[1550]: 2026-01-23 00:59:41.846 [INFO][4263] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" HandleID="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Workload="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:41.849 [INFO][4263] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" HandleID="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Workload="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cbc9d4d7d-jwv45", "timestamp":"2026-01-23 00:59:41.846836627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:41.850 [INFO][4263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:41.850 [INFO][4263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:41.850 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:41.918 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" host="localhost" Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:42.007 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:42.033 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:42.045 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:42.055 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:42.346316 containerd[1550]: 2026-01-23 00:59:42.056 [INFO][4263] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" host="localhost" Jan 23 00:59:42.347214 containerd[1550]: 2026-01-23 00:59:42.062 [INFO][4263] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4 Jan 23 00:59:42.347214 containerd[1550]: 2026-01-23 00:59:42.111 [INFO][4263] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" host="localhost" Jan 23 00:59:42.347214 containerd[1550]: 2026-01-23 00:59:42.193 [INFO][4263] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" host="localhost" Jan 23 00:59:42.347214 containerd[1550]: 2026-01-23 00:59:42.198 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" host="localhost" Jan 23 00:59:42.347214 containerd[1550]: 2026-01-23 00:59:42.198 [INFO][4263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:42.347214 containerd[1550]: 2026-01-23 00:59:42.198 [INFO][4263] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" HandleID="k8s-pod-network.5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Workload="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.347774 containerd[1550]: 2026-01-23 00:59:42.212 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0", GenerateName:"calico-apiserver-7cbc9d4d7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"479d141d-917c-42c5-8315-9e3283f05aa9", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbc9d4d7d", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cbc9d4d7d-jwv45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bbfd657699", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:42.348095 containerd[1550]: 2026-01-23 00:59:42.212 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.348095 containerd[1550]: 2026-01-23 00:59:42.212 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bbfd657699 ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.348095 containerd[1550]: 2026-01-23 00:59:42.246 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.348216 containerd[1550]: 2026-01-23 00:59:42.258 [INFO][4219] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0", GenerateName:"calico-apiserver-7cbc9d4d7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"479d141d-917c-42c5-8315-9e3283f05aa9", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cbc9d4d7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4", Pod:"calico-apiserver-7cbc9d4d7d-jwv45", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9bbfd657699", MAC:"72:52:ff:0a:0b:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:42.348335 containerd[1550]: 2026-01-23 00:59:42.333 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" Namespace="calico-apiserver" Pod="calico-apiserver-7cbc9d4d7d-jwv45" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cbc9d4d7d--jwv45-eth0" Jan 23 00:59:42.402199 containerd[1550]: time="2026-01-23T00:59:42.402141147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:42.407878 containerd[1550]: time="2026-01-23T00:59:42.407684489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:59:42.407878 containerd[1550]: time="2026-01-23T00:59:42.407785537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:59:42.408232 kubelet[2810]: E0123 00:59:42.408148 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:59:42.408232 kubelet[2810]: E0123 00:59:42.408216 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:59:42.408862 kubelet[2810]: E0123 00:59:42.408329 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
whisker-backend start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:42.408862 kubelet[2810]: E0123 00:59:42.408430 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 00:59:42.466480 containerd[1550]: time="2026-01-23T00:59:42.466326605Z" level=info msg="connecting to shim 5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4" address="unix:///run/containerd/s/bfd04d8646b420b069856074b6e5137791f1ea84885f0bbd7ad1a78ec9b97c72" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:42.611237 systemd[1]: Started cri-containerd-5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4.scope - libcontainer container 5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4. 
Jan 23 00:59:42.645899 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:42.765623 containerd[1550]: time="2026-01-23T00:59:42.765293248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cbc9d4d7d-jwv45,Uid:479d141d-917c-42c5-8315-9e3283f05aa9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5452fd708942496b45c5485dc755b6980db77cc0353139e08b0042b0e7856ba4\"" Jan 23 00:59:42.781986 containerd[1550]: time="2026-01-23T00:59:42.781782150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:59:42.855663 systemd-networkd[1459]: cali81080336569: Gained IPv6LL Jan 23 00:59:42.950631 containerd[1550]: time="2026-01-23T00:59:42.950518711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:42.987408 systemd-networkd[1459]: cali92c287d7d3b: Gained IPv6LL Jan 23 00:59:42.989251 containerd[1550]: time="2026-01-23T00:59:42.988319652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:42.989251 containerd[1550]: time="2026-01-23T00:59:42.988799473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:59:42.990263 kubelet[2810]: E0123 00:59:42.990213 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:42.992060 kubelet[2810]: E0123 
00:59:42.991790 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:59:42.992132 kubelet[2810]: E0123 00:59:42.992059 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:42.992280 kubelet[2810]: E0123 00:59:42.992115 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 00:59:43.197065 kubelet[2810]: E0123 00:59:43.196903 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 00:59:43.206079 kubelet[2810]: E0123 00:59:43.205815 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 00:59:43.207263 kubelet[2810]: E0123 00:59:43.207157 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 00:59:43.283341 systemd-networkd[1459]: vxlan.calico: Link UP Jan 23 00:59:43.283355 systemd-networkd[1459]: vxlan.calico: Gained carrier Jan 23 00:59:43.417297 containerd[1550]: 
time="2026-01-23T00:59:43.417125758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qpc42,Uid:1a86b4da-5edc-4f85-b21e-20314381c9bb,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:43.436494 containerd[1550]: time="2026-01-23T00:59:43.436403101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mv9zp,Uid:b6b8d32b-ca26-41e6-a351-31a3afa9d455,Namespace:kube-system,Attempt:0,}" Jan 23 00:59:43.480041 containerd[1550]: time="2026-01-23T00:59:43.479354594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j5rv6,Uid:43accc0b-89ee-4b5d-a714-8b1afe2391c5,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:43.481403 containerd[1550]: time="2026-01-23T00:59:43.481211847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558649896b-xvhfg,Uid:655c83b6-f33b-4c1f-8ca9-c00c869c6e41,Namespace:calico-system,Attempt:0,}" Jan 23 00:59:43.812606 systemd-networkd[1459]: cali9bbfd657699: Gained IPv6LL Jan 23 00:59:44.110701 systemd-networkd[1459]: calidc8664e5608: Link UP Jan 23 00:59:44.115150 systemd-networkd[1459]: calidc8664e5608: Gained carrier Jan 23 00:59:44.200410 containerd[1550]: 2026-01-23 00:59:43.692 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qpc42-eth0 csi-node-driver- calico-system 1a86b4da-5edc-4f85-b21e-20314381c9bb 771 0 2026-01-23 00:59:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qpc42 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidc8664e5608 [] [] }} 
ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-" Jan 23 00:59:44.200410 containerd[1550]: 2026-01-23 00:59:43.696 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.200410 containerd[1550]: 2026-01-23 00:59:43.822 [INFO][4510] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" HandleID="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Workload="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.825 [INFO][4510] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" HandleID="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Workload="localhost-k8s-csi--node--driver--qpc42-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00058cb60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qpc42", "timestamp":"2026-01-23 00:59:43.822376136 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.825 [INFO][4510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.825 [INFO][4510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.825 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.845 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" host="localhost" Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.888 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.937 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.946 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.959 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.200854 containerd[1550]: 2026-01-23 00:59:43.959 [INFO][4510] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" host="localhost" Jan 23 00:59:44.203469 containerd[1550]: 2026-01-23 00:59:43.965 [INFO][4510] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334 Jan 23 00:59:44.203469 containerd[1550]: 2026-01-23 00:59:43.999 [INFO][4510] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" host="localhost" Jan 23 00:59:44.203469 containerd[1550]: 2026-01-23 00:59:44.086 [INFO][4510] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" host="localhost" Jan 23 00:59:44.203469 containerd[1550]: 2026-01-23 00:59:44.086 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" host="localhost" Jan 23 00:59:44.203469 containerd[1550]: 2026-01-23 00:59:44.087 [INFO][4510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:44.203469 containerd[1550]: 2026-01-23 00:59:44.087 [INFO][4510] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" HandleID="k8s-pod-network.fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Workload="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.203705 containerd[1550]: 2026-01-23 00:59:44.097 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qpc42-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a86b4da-5edc-4f85-b21e-20314381c9bb", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qpc42", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc8664e5608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.203846 containerd[1550]: 2026-01-23 00:59:44.097 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.203846 containerd[1550]: 2026-01-23 00:59:44.097 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc8664e5608 ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.203846 containerd[1550]: 2026-01-23 00:59:44.120 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.204100 containerd[1550]: 2026-01-23 00:59:44.121 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" 
Namespace="calico-system" Pod="csi-node-driver-qpc42" WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qpc42-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1a86b4da-5edc-4f85-b21e-20314381c9bb", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334", Pod:"csi-node-driver-qpc42", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc8664e5608", MAC:"de:16:e2:56:61:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.204234 containerd[1550]: 2026-01-23 00:59:44.176 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" Namespace="calico-system" Pod="csi-node-driver-qpc42" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qpc42-eth0" Jan 23 00:59:44.215338 kubelet[2810]: E0123 00:59:44.215165 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 00:59:44.230315 kubelet[2810]: E0123 00:59:44.230180 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 00:59:44.306235 systemd-networkd[1459]: calif35e7c9ee62: Link UP Jan 23 00:59:44.310546 systemd-networkd[1459]: calif35e7c9ee62: Gained carrier Jan 23 00:59:44.378663 containerd[1550]: 2026-01-23 00:59:43.689 [INFO][4471] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--j5rv6-eth0 goldmane-7c778bb748- calico-system 43accc0b-89ee-4b5d-a714-8b1afe2391c5 879 0 2026-01-23 00:59:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-j5rv6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif35e7c9ee62 [] [] }} ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-" Jan 23 00:59:44.378663 containerd[1550]: 2026-01-23 00:59:43.689 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.378663 containerd[1550]: 2026-01-23 00:59:43.830 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" HandleID="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Workload="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:43.833 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" HandleID="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Workload="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df190), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-j5rv6", "timestamp":"2026-01-23 00:59:43.830719948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:43.833 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.087 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.088 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.114 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" host="localhost" Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.132 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.164 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.181 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.193 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.379113 containerd[1550]: 2026-01-23 00:59:44.193 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" host="localhost" Jan 23 00:59:44.382352 
containerd[1550]: 2026-01-23 00:59:44.199 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567 Jan 23 00:59:44.382352 containerd[1550]: 2026-01-23 00:59:44.218 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" host="localhost" Jan 23 00:59:44.382352 containerd[1550]: 2026-01-23 00:59:44.251 [INFO][4508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" host="localhost" Jan 23 00:59:44.382352 containerd[1550]: 2026-01-23 00:59:44.251 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" host="localhost" Jan 23 00:59:44.382352 containerd[1550]: 2026-01-23 00:59:44.251 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:59:44.382352 containerd[1550]: 2026-01-23 00:59:44.251 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" HandleID="k8s-pod-network.e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Workload="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.382549 containerd[1550]: 2026-01-23 00:59:44.281 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--j5rv6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"43accc0b-89ee-4b5d-a714-8b1afe2391c5", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-j5rv6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif35e7c9ee62", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.382549 containerd[1550]: 2026-01-23 00:59:44.281 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.382747 containerd[1550]: 2026-01-23 00:59:44.281 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif35e7c9ee62 ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.382747 containerd[1550]: 2026-01-23 00:59:44.307 [INFO][4471] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.382820 containerd[1550]: 2026-01-23 00:59:44.322 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--j5rv6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"43accc0b-89ee-4b5d-a714-8b1afe2391c5", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 12, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567", Pod:"goldmane-7c778bb748-j5rv6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif35e7c9ee62", MAC:"82:eb:3a:90:9a:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.388152 containerd[1550]: 2026-01-23 00:59:44.355 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" Namespace="calico-system" Pod="goldmane-7c778bb748-j5rv6" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--j5rv6-eth0" Jan 23 00:59:44.403348 containerd[1550]: time="2026-01-23T00:59:44.403247549Z" level=info msg="connecting to shim fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334" address="unix:///run/containerd/s/b0a1a181a242ecbaf1023568982144114893c1a7519b8b9d4613b5bf30ca2946" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:44.534777 systemd[1]: Started cri-containerd-fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334.scope - libcontainer container fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334. 
Jan 23 00:59:44.543905 containerd[1550]: time="2026-01-23T00:59:44.543815216Z" level=info msg="connecting to shim e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567" address="unix:///run/containerd/s/01d307961aacd804966889d2312c86731829ebe9e2b40bc078a2d7cdd7a29eba" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:44.596916 systemd-networkd[1459]: calia21a623746e: Link UP Jan 23 00:59:44.603701 systemd[1]: Started cri-containerd-e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567.scope - libcontainer container e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567. Jan 23 00:59:44.606348 systemd-networkd[1459]: calia21a623746e: Gained carrier Jan 23 00:59:44.648883 systemd-networkd[1459]: vxlan.calico: Gained IPv6LL Jan 23 00:59:44.659533 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:44.680678 containerd[1550]: 2026-01-23 00:59:43.737 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--mv9zp-eth0 coredns-66bc5c9577- kube-system b6b8d32b-ca26-41e6-a351-31a3afa9d455 875 0 2026-01-23 00:58:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-mv9zp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia21a623746e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-" Jan 23 00:59:44.680678 containerd[1550]: 2026-01-23 00:59:43.738 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 00:59:44.680678 containerd[1550]: 2026-01-23 00:59:43.909 [INFO][4524] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" HandleID="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Workload="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:43.909 [INFO][4524] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" HandleID="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Workload="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000147180), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-mv9zp", "timestamp":"2026-01-23 00:59:43.909424495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:43.910 [INFO][4524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.254 [INFO][4524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.255 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.314 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" host="localhost" Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.384 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.430 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.451 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.462 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.681305 containerd[1550]: 2026-01-23 00:59:44.466 [INFO][4524] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" host="localhost" Jan 23 00:59:44.681681 containerd[1550]: 2026-01-23 00:59:44.484 [INFO][4524] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff Jan 23 00:59:44.681681 containerd[1550]: 2026-01-23 00:59:44.506 [INFO][4524] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" host="localhost" Jan 23 00:59:44.681681 containerd[1550]: 2026-01-23 00:59:44.544 [INFO][4524] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" host="localhost" Jan 23 00:59:44.681681 containerd[1550]: 2026-01-23 00:59:44.544 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" host="localhost" Jan 23 00:59:44.681681 containerd[1550]: 2026-01-23 00:59:44.544 [INFO][4524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:44.681681 containerd[1550]: 2026-01-23 00:59:44.544 [INFO][4524] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" HandleID="k8s-pod-network.41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Workload="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 00:59:44.681864 containerd[1550]: 2026-01-23 00:59:44.562 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mv9zp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b6b8d32b-ca26-41e6-a351-31a3afa9d455", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-mv9zp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia21a623746e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.681864 containerd[1550]: 2026-01-23 00:59:44.564 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 00:59:44.681864 containerd[1550]: 2026-01-23 00:59:44.564 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia21a623746e ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 
00:59:44.681864 containerd[1550]: 2026-01-23 00:59:44.614 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 00:59:44.681864 containerd[1550]: 2026-01-23 00:59:44.614 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--mv9zp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b6b8d32b-ca26-41e6-a351-31a3afa9d455", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff", Pod:"coredns-66bc5c9577-mv9zp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia21a623746e", 
MAC:"22:64:86:9c:0e:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.681864 containerd[1550]: 2026-01-23 00:59:44.653 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" Namespace="kube-system" Pod="coredns-66bc5c9577-mv9zp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--mv9zp-eth0" Jan 23 00:59:44.715595 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:44.780839 systemd-networkd[1459]: calif7b1c3948bd: Link UP Jan 23 00:59:44.782660 systemd-networkd[1459]: calif7b1c3948bd: Gained carrier Jan 23 00:59:44.792079 containerd[1550]: time="2026-01-23T00:59:44.791656119Z" level=info msg="connecting to shim 41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff" address="unix:///run/containerd/s/919cfb8839d3c9d397b1793393f40e07024c6c20a9be28f1c22ddeefe9ae4786" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:44.797123 containerd[1550]: time="2026-01-23T00:59:44.796880845Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qpc42,Uid:1a86b4da-5edc-4f85-b21e-20314381c9bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"fffd2eb9c01e4912995639622a92f15812156087dbe8ae80fd4985e6a6ea6334\"" Jan 23 00:59:44.807682 containerd[1550]: time="2026-01-23T00:59:44.807630820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:43.761 [INFO][4487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0 calico-kube-controllers-558649896b- calico-system 655c83b6-f33b-4c1f-8ca9-c00c869c6e41 873 0 2026-01-23 00:59:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:558649896b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-558649896b-xvhfg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif7b1c3948bd [] [] }} ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:43.763 [INFO][4487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:43.916 [INFO][4535] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" 
HandleID="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Workload="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:43.917 [INFO][4535] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" HandleID="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Workload="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000349be0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-558649896b-xvhfg", "timestamp":"2026-01-23 00:59:43.916686364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:43.917 [INFO][4535] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.545 [INFO][4535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.545 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.563 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.604 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.657 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.680 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.691 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.691 [INFO][4535] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.697 [INFO][4535] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716 Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.737 [INFO][4535] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.761 [INFO][4535] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.761 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" host="localhost" Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.761 [INFO][4535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:59:44.884691 containerd[1550]: 2026-01-23 00:59:44.761 [INFO][4535] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" HandleID="k8s-pod-network.208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Workload="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.885788 containerd[1550]: 2026-01-23 00:59:44.775 [INFO][4487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0", GenerateName:"calico-kube-controllers-558649896b-", Namespace:"calico-system", SelfLink:"", UID:"655c83b6-f33b-4c1f-8ca9-c00c869c6e41", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"558649896b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-558649896b-xvhfg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7b1c3948bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.885788 containerd[1550]: 2026-01-23 00:59:44.775 [INFO][4487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.885788 containerd[1550]: 2026-01-23 00:59:44.775 [INFO][4487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7b1c3948bd ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.885788 containerd[1550]: 2026-01-23 00:59:44.784 [INFO][4487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.885788 containerd[1550]: 
2026-01-23 00:59:44.792 [INFO][4487] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0", GenerateName:"calico-kube-controllers-558649896b-", Namespace:"calico-system", SelfLink:"", UID:"655c83b6-f33b-4c1f-8ca9-c00c869c6e41", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"558649896b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716", Pod:"calico-kube-controllers-558649896b-xvhfg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif7b1c3948bd", MAC:"ca:c1:30:2f:00:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:59:44.885788 containerd[1550]: 
2026-01-23 00:59:44.869 [INFO][4487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" Namespace="calico-system" Pod="calico-kube-controllers-558649896b-xvhfg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--558649896b--xvhfg-eth0" Jan 23 00:59:44.892234 containerd[1550]: time="2026-01-23T00:59:44.892191118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:44.913378 containerd[1550]: time="2026-01-23T00:59:44.910606737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:59:44.913378 containerd[1550]: time="2026-01-23T00:59:44.910734696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 00:59:44.913546 kubelet[2810]: E0123 00:59:44.911897 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:59:44.913546 kubelet[2810]: E0123 00:59:44.912514 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:59:44.913470 systemd[1]: Started cri-containerd-41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff.scope - libcontainer container 
41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff. Jan 23 00:59:44.915382 containerd[1550]: time="2026-01-23T00:59:44.914078461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-j5rv6,Uid:43accc0b-89ee-4b5d-a714-8b1afe2391c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"e02923f0785348377dfcdefcc5d6cd5e9f74b697cfeae8a09eba579909490567\"" Jan 23 00:59:44.915446 kubelet[2810]: E0123 00:59:44.913855 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:44.920258 containerd[1550]: time="2026-01-23T00:59:44.919779460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:59:44.948888 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:44.988270 containerd[1550]: time="2026-01-23T00:59:44.987423542Z" level=info msg="connecting to shim 208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716" address="unix:///run/containerd/s/ba9fc98992aec23df858d5a00e355fdd4c03f7d5335b5f31b8fbcb5424b30f60" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:59:45.014899 containerd[1550]: time="2026-01-23T00:59:45.014688036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:45.016877 containerd[1550]: time="2026-01-23T00:59:45.016549051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:59:45.016877 containerd[1550]: time="2026-01-23T00:59:45.016666751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:59:45.017535 kubelet[2810]: E0123 00:59:45.017350 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:59:45.017535 kubelet[2810]: E0123 00:59:45.017467 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:59:45.018183 kubelet[2810]: E0123 00:59:45.017639 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:45.018183 kubelet[2810]: E0123 00:59:45.017735 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:45.019471 containerd[1550]: time="2026-01-23T00:59:45.018504618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:59:45.068601 systemd[1]: Started cri-containerd-208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716.scope - libcontainer container 208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716. 
Jan 23 00:59:45.081981 containerd[1550]: time="2026-01-23T00:59:45.081762884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mv9zp,Uid:b6b8d32b-ca26-41e6-a351-31a3afa9d455,Namespace:kube-system,Attempt:0,} returns sandbox id \"41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff\"" Jan 23 00:59:45.095800 containerd[1550]: time="2026-01-23T00:59:45.095656477Z" level=info msg="CreateContainer within sandbox \"41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:59:45.115668 containerd[1550]: time="2026-01-23T00:59:45.114914174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:45.116586 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 00:59:45.119046 containerd[1550]: time="2026-01-23T00:59:45.118219076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:59:45.119299 containerd[1550]: time="2026-01-23T00:59:45.118417554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:59:45.119493 kubelet[2810]: E0123 00:59:45.119443 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:59:45.119547 kubelet[2810]: E0123 00:59:45.119506 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:59:45.119715 kubelet[2810]: E0123 00:59:45.119599 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:45.119715 kubelet[2810]: E0123 00:59:45.119690 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 00:59:45.146144 containerd[1550]: time="2026-01-23T00:59:45.145929136Z" level=info msg="Container 25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:59:45.175180 containerd[1550]: time="2026-01-23T00:59:45.174894242Z" level=info msg="CreateContainer within sandbox \"41ceb6b559a2b69c1fc866af0967c0951961605c43cc12c696fcfd45637c13ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31\"" Jan 23 00:59:45.179601 containerd[1550]: time="2026-01-23T00:59:45.179410617Z" level=info msg="StartContainer for 
\"25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31\"" Jan 23 00:59:45.183796 containerd[1550]: time="2026-01-23T00:59:45.183661497Z" level=info msg="connecting to shim 25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31" address="unix:///run/containerd/s/919cfb8839d3c9d397b1793393f40e07024c6c20a9be28f1c22ddeefe9ae4786" protocol=ttrpc version=3 Jan 23 00:59:45.228649 kubelet[2810]: E0123 00:59:45.228557 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 00:59:45.230691 systemd[1]: Started cri-containerd-25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31.scope - libcontainer container 25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31. 
Jan 23 00:59:45.258833 containerd[1550]: time="2026-01-23T00:59:45.258733757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-558649896b-xvhfg,Uid:655c83b6-f33b-4c1f-8ca9-c00c869c6e41,Namespace:calico-system,Attempt:0,} returns sandbox id \"208cbfeddb7478392e74409d60061f66630be7004e1a82ae5cd84bc96c72c716\"" Jan 23 00:59:45.260237 kubelet[2810]: E0123 00:59:45.260187 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:45.262549 containerd[1550]: time="2026-01-23T00:59:45.262520361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:59:45.342491 containerd[1550]: time="2026-01-23T00:59:45.342320623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:59:45.344196 containerd[1550]: time="2026-01-23T00:59:45.344111547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:59:45.344510 containerd[1550]: time="2026-01-23T00:59:45.344157024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:59:45.344794 kubelet[2810]: E0123 00:59:45.344496 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:59:45.344794 kubelet[2810]: E0123 00:59:45.344553 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:59:45.344794 kubelet[2810]: E0123 00:59:45.344683 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:59:45.344794 kubelet[2810]: E0123 00:59:45.344738 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 00:59:45.390639 containerd[1550]: time="2026-01-23T00:59:45.390542086Z" level=info msg="StartContainer for \"25d8e8c94b5f0bbaf5d8789180633b49f48ee0986c5ba649b5f7007511e09d31\" returns successfully" Jan 23 00:59:45.923792 update_engine[1541]: I20260123 00:59:45.908082 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:59:45.927505 update_engine[1541]: I20260123 00:59:45.926784 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:59:45.927505 update_engine[1541]: I20260123 00:59:45.927443 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 00:59:45.953412 update_engine[1541]: E20260123 00:59:45.953202 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:59:45.953412 update_engine[1541]: I20260123 00:59:45.953385 1541 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 00:59:45.953412 update_engine[1541]: I20260123 00:59:45.953405 1541 omaha_request_action.cc:617] Omaha request response: Jan 23 00:59:45.953678 update_engine[1541]: E20260123 00:59:45.953547 1541 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 00:59:45.989785 systemd-networkd[1459]: calidc8664e5608: Gained IPv6LL Jan 23 00:59:45.998097 update_engine[1541]: I20260123 00:59:45.997895 1541 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 23 00:59:45.998252 update_engine[1541]: I20260123 00:59:45.998183 1541 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 00:59:45.998252 update_engine[1541]: I20260123 00:59:45.998205 1541 update_attempter.cc:306] Processing Done. Jan 23 00:59:45.998252 update_engine[1541]: E20260123 00:59:45.998232 1541 update_attempter.cc:619] Update failed. Jan 23 00:59:45.998252 update_engine[1541]: I20260123 00:59:45.998247 1541 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 00:59:45.998462 update_engine[1541]: I20260123 00:59:45.998260 1541 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 00:59:45.998462 update_engine[1541]: I20260123 00:59:45.998273 1541 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 23 00:59:45.998560 update_engine[1541]: I20260123 00:59:45.998471 1541 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 00:59:45.998560 update_engine[1541]: I20260123 00:59:45.998522 1541 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 00:59:45.998560 update_engine[1541]: I20260123 00:59:45.998536 1541 omaha_request_action.cc:272] Request: Jan 23 00:59:45.998560 update_engine[1541]: Jan 23 00:59:45.998560 update_engine[1541]: Jan 23 00:59:45.998560 update_engine[1541]: Jan 23 00:59:45.998560 update_engine[1541]: Jan 23 00:59:45.998560 update_engine[1541]: Jan 23 00:59:45.998560 update_engine[1541]: Jan 23 00:59:45.998560 update_engine[1541]: I20260123 00:59:45.998550 1541 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 00:59:45.999190 update_engine[1541]: I20260123 00:59:45.998595 1541 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 00:59:46.001001 update_engine[1541]: I20260123 00:59:45.999214 1541 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 00:59:46.011171 locksmithd[1582]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 00:59:46.021901 update_engine[1541]: E20260123 00:59:46.020898 1541 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021148 1541 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021168 1541 omaha_request_action.cc:617] Omaha request response: Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021180 1541 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021189 1541 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021198 1541 update_attempter.cc:306] Processing Done. Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021207 1541 update_attempter.cc:310] Error event sent. 
Jan 23 00:59:46.021901 update_engine[1541]: I20260123 00:59:46.021263 1541 update_check_scheduler.cc:74] Next update check in 45m24s Jan 23 00:59:46.026173 locksmithd[1582]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 00:59:46.120249 systemd-networkd[1459]: calif35e7c9ee62: Gained IPv6LL Jan 23 00:59:46.273658 kubelet[2810]: E0123 00:59:46.272177 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 00:59:46.275905 kubelet[2810]: E0123 00:59:46.275283 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 00:59:46.278120 kubelet[2810]: E0123 00:59:46.277324 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 00:59:46.327113 kubelet[2810]: I0123 00:59:46.326497 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mv9zp" podStartSLOduration=70.32647501 podStartE2EDuration="1m10.32647501s" podCreationTimestamp="2026-01-23 00:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:59:46.326048593 +0000 UTC m=+74.221138657" watchObservedRunningTime="2026-01-23 00:59:46.32647501 +0000 UTC m=+74.221565063" Jan 23 00:59:46.442367 systemd-networkd[1459]: calia21a623746e: Gained IPv6LL Jan 23 00:59:46.692486 systemd-networkd[1459]: calif7b1c3948bd: Gained IPv6LL Jan 23 00:59:47.281474 kubelet[2810]: E0123 00:59:47.281085 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41"
Jan 23 00:59:52.415913 containerd[1550]: time="2026-01-23T00:59:52.415781022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ht42l,Uid:cba0d29e-89d0-474c-bb48-ac261d9e3439,Namespace:kube-system,Attempt:0,}"
Jan 23 00:59:52.779390 systemd-networkd[1459]: cali3849e31f1da: Link UP
Jan 23 00:59:52.782765 systemd-networkd[1459]: cali3849e31f1da: Gained carrier
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.542 [INFO][4861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ht42l-eth0 coredns-66bc5c9577- kube-system cba0d29e-89d0-474c-bb48-ac261d9e3439 874 0 2026-01-23 00:58:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ht42l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3849e31f1da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.542 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.624 [INFO][4876] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" HandleID="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Workload="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.625 [INFO][4876] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" HandleID="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Workload="localhost-k8s-coredns--66bc5c9577--ht42l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a35e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ht42l", "timestamp":"2026-01-23 00:59:52.62492204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.625 [INFO][4876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.625 [INFO][4876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.625 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.657 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.683 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.703 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.716 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.723 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.723 [INFO][4876] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.729 [INFO][4876] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.746 [INFO][4876] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.763 [INFO][4876] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.763 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" host="localhost"
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.763 [INFO][4876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 00:59:52.840658 containerd[1550]: 2026-01-23 00:59:52.763 [INFO][4876] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" HandleID="k8s-pod-network.b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Workload="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.841905 containerd[1550]: 2026-01-23 00:59:52.768 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ht42l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cba0d29e-89d0-474c-bb48-ac261d9e3439", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ht42l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3849e31f1da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 00:59:52.841905 containerd[1550]: 2026-01-23 00:59:52.768 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.841905 containerd[1550]: 2026-01-23 00:59:52.768 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3849e31f1da ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.841905 containerd[1550]: 2026-01-23 00:59:52.782 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.841905 containerd[1550]: 2026-01-23 00:59:52.782 [INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ht42l-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"cba0d29e-89d0-474c-bb48-ac261d9e3439", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 58, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91", Pod:"coredns-66bc5c9577-ht42l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3849e31f1da", MAC:"e2:38:03:d6:91:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 00:59:52.841905 containerd[1550]: 2026-01-23 00:59:52.826 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" Namespace="kube-system" Pod="coredns-66bc5c9577-ht42l" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ht42l-eth0"
Jan 23 00:59:52.931347 containerd[1550]: time="2026-01-23T00:59:52.931293256Z" level=info msg="connecting to shim b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91" address="unix:///run/containerd/s/41e7a260a656dc2d0d3217426bc8d0269aca487eb13b52a076616b8f1e48c4b6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:59:52.977882 systemd[1]: Started cri-containerd-b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91.scope - libcontainer container b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91.
Jan 23 00:59:53.016298 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 23 00:59:53.103729 containerd[1550]: time="2026-01-23T00:59:53.103627505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ht42l,Uid:cba0d29e-89d0-474c-bb48-ac261d9e3439,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91\""
Jan 23 00:59:53.130867 containerd[1550]: time="2026-01-23T00:59:53.130807027Z" level=info msg="CreateContainer within sandbox \"b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 00:59:53.184662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913920803.mount: Deactivated successfully.
Jan 23 00:59:53.200091 containerd[1550]: time="2026-01-23T00:59:53.199838643Z" level=info msg="Container 598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:59:53.213787 containerd[1550]: time="2026-01-23T00:59:53.213663457Z" level=info msg="CreateContainer within sandbox \"b4c3c9d44a6cb7151016d5318330f6d55aa946ee3ab110a72e2ac5a152c7fe91\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7\""
Jan 23 00:59:53.216233 containerd[1550]: time="2026-01-23T00:59:53.216200315Z" level=info msg="StartContainer for \"598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7\""
Jan 23 00:59:53.230279 containerd[1550]: time="2026-01-23T00:59:53.230146124Z" level=info msg="connecting to shim 598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7" address="unix:///run/containerd/s/41e7a260a656dc2d0d3217426bc8d0269aca487eb13b52a076616b8f1e48c4b6" protocol=ttrpc version=3
Jan 23 00:59:53.279886 systemd[1]: Started cri-containerd-598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7.scope - libcontainer container 598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7.
Jan 23 00:59:53.420758 containerd[1550]: time="2026-01-23T00:59:53.420163526Z" level=info msg="StartContainer for \"598dea3f51fd155b470df954ff647e074f1a80d8eff523229be049f94b5796b7\" returns successfully"
Jan 23 00:59:54.181327 systemd-networkd[1459]: cali3849e31f1da: Gained IPv6LL
Jan 23 00:59:54.441597 kubelet[2810]: I0123 00:59:54.441418 2810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ht42l" podStartSLOduration=78.441393855 podStartE2EDuration="1m18.441393855s" podCreationTimestamp="2026-01-23 00:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:59:54.395591382 +0000 UTC m=+82.290681455" watchObservedRunningTime="2026-01-23 00:59:54.441393855 +0000 UTC m=+82.336483899"
Jan 23 00:59:55.395810 containerd[1550]: time="2026-01-23T00:59:55.394190040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:59:55.482157 containerd[1550]: time="2026-01-23T00:59:55.481758042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:55.493499 containerd[1550]: time="2026-01-23T00:59:55.493191544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:59:55.493499 containerd[1550]: time="2026-01-23T00:59:55.493346934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:59:55.493685 kubelet[2810]: E0123 00:59:55.493591 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:55.493685 kubelet[2810]: E0123 00:59:55.493642 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:55.494345 kubelet[2810]: E0123 00:59:55.493734 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:55.494345 kubelet[2810]: E0123 00:59:55.493780 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"
Jan 23 00:59:57.401860 containerd[1550]: time="2026-01-23T00:59:57.401496855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 00:59:57.477824 containerd[1550]: time="2026-01-23T00:59:57.471825097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:57.498219 containerd[1550]: time="2026-01-23T00:59:57.498129411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 00:59:57.498576 containerd[1550]: time="2026-01-23T00:59:57.498311882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 00:59:57.498723 kubelet[2810]: E0123 00:59:57.498634 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:59:57.498723 kubelet[2810]: E0123 00:59:57.498704 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:59:57.499419 kubelet[2810]: E0123 00:59:57.498803 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:57.502890 containerd[1550]: time="2026-01-23T00:59:57.502677333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 00:59:57.594257 containerd[1550]: time="2026-01-23T00:59:57.590924567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:57.597652 containerd[1550]: time="2026-01-23T00:59:57.597521873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 00:59:57.597775 containerd[1550]: time="2026-01-23T00:59:57.597685158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:59:57.598132 kubelet[2810]: E0123 00:59:57.597883 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:59:57.598132 kubelet[2810]: E0123 00:59:57.598098 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:59:57.598900 kubelet[2810]: E0123 00:59:57.598628 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:57.599518 kubelet[2810]: E0123 00:59:57.599239 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b"
Jan 23 00:59:58.404406 containerd[1550]: time="2026-01-23T00:59:58.401600260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 00:59:58.501508 containerd[1550]: time="2026-01-23T00:59:58.499822171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:58.506366 containerd[1550]: time="2026-01-23T00:59:58.504310249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 00:59:58.506366 containerd[1550]: time="2026-01-23T00:59:58.504520924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:59:58.524861 kubelet[2810]: E0123 00:59:58.523886 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:59:58.524861 kubelet[2810]: E0123 00:59:58.524589 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:59:58.583846 kubelet[2810]: E0123 00:59:58.528240 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:58.583846 kubelet[2810]: E0123 00:59:58.528311 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41"
Jan 23 00:59:58.589440 containerd[1550]: time="2026-01-23T00:59:58.565723634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:59:58.731029 containerd[1550]: time="2026-01-23T00:59:58.718486957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:58.778303 containerd[1550]: time="2026-01-23T00:59:58.737504161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:59:58.778303 containerd[1550]: time="2026-01-23T00:59:58.738253891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:59:58.783647 kubelet[2810]: E0123 00:59:58.778546 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:58.785852 kubelet[2810]: E0123 00:59:58.785699 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:59:58.787554 kubelet[2810]: E0123 00:59:58.787354 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:58.788092 kubelet[2810]: E0123 00:59:58.787822 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169"
Jan 23 00:59:59.393691 containerd[1550]: time="2026-01-23T00:59:59.393639168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:59:59.463329 containerd[1550]: time="2026-01-23T00:59:59.463242658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:59.465176 containerd[1550]: time="2026-01-23T00:59:59.464875880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:59:59.465176 containerd[1550]: time="2026-01-23T00:59:59.465147717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:59:59.465644 kubelet[2810]: E0123 00:59:59.465528 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:59:59.465644 kubelet[2810]: E0123 00:59:59.465622 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:59:59.465836 kubelet[2810]: E0123 00:59:59.465741 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:59.466616 containerd[1550]: time="2026-01-23T00:59:59.466572868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 00:59:59.530757 containerd[1550]: time="2026-01-23T00:59:59.530675097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:59:59.535248 containerd[1550]: time="2026-01-23T00:59:59.534912318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 00:59:59.535248 containerd[1550]: time="2026-01-23T00:59:59.534983297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 00:59:59.535572 kubelet[2810]: E0123 00:59:59.535517 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:59:59.536682 kubelet[2810]: E0123 00:59:59.535586 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:59:59.536682 kubelet[2810]: E0123 00:59:59.535685 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:59:59.536682 kubelet[2810]: E0123 00:59:59.535753 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb"
Jan 23 01:00:00.405192 containerd[1550]: time="2026-01-23T01:00:00.404400604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 01:00:00.953490 containerd[1550]: time="2026-01-23T01:00:00.952878807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:00:00.956707 containerd[1550]: time="2026-01-23T01:00:00.956477101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 01:00:00.956707 containerd[1550]: time="2026-01-23T01:00:00.956604190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:00:00.957202 kubelet[2810]: E0123 01:00:00.956811 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:00:00.957202 kubelet[2810]: E0123 01:00:00.956877 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 01:00:00.957202 kubelet[2810]: E0123 01:00:00.957099 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:00:00.957202 kubelet[2810]: E0123 01:00:00.957145 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5"
Jan 23 01:00:09.399017 kubelet[2810]: E0123 01:00:09.398566 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"
Jan 23 01:00:10.396912 kubelet[2810]: E0123 01:00:10.396798 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\"
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:00:11.395534 kubelet[2810]: E0123 01:00:11.395445 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:00:11.402148 kubelet[2810]: E0123 01:00:11.401236 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:00:13.393551 kubelet[2810]: E0123 01:00:13.392989 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:00:13.396896 kubelet[2810]: E0123 01:00:13.396714 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:00:21.396013 containerd[1550]: time="2026-01-23T01:00:21.395234071Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:00:21.493079 containerd[1550]: time="2026-01-23T01:00:21.492915162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:21.496576 containerd[1550]: time="2026-01-23T01:00:21.496188274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:00:21.496576 containerd[1550]: time="2026-01-23T01:00:21.496313297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:00:21.496720 kubelet[2810]: E0123 01:00:21.496662 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:00:21.497279 kubelet[2810]: E0123 01:00:21.496725 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:00:21.497279 kubelet[2810]: E0123 01:00:21.497033 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:21.497279 kubelet[2810]: E0123 01:00:21.497088 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:00:22.395605 containerd[1550]: time="2026-01-23T01:00:22.395258044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:00:22.471625 containerd[1550]: time="2026-01-23T01:00:22.471072568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:22.490509 containerd[1550]: time="2026-01-23T01:00:22.490434446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:00:22.490838 containerd[1550]: time="2026-01-23T01:00:22.490719289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:00:22.494021 kubelet[2810]: E0123 01:00:22.493319 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:00:22.494021 kubelet[2810]: E0123 01:00:22.493397 2810 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:00:22.494021 kubelet[2810]: E0123 01:00:22.493496 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:22.506592 containerd[1550]: time="2026-01-23T01:00:22.506515628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:00:22.655590 containerd[1550]: time="2026-01-23T01:00:22.655243230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:22.657673 containerd[1550]: time="2026-01-23T01:00:22.657091472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:00:22.658798 containerd[1550]: time="2026-01-23T01:00:22.658291303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:00:22.660853 kubelet[2810]: E0123 01:00:22.659524 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:00:22.660853 kubelet[2810]: E0123 01:00:22.659622 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:00:22.660853 kubelet[2810]: E0123 01:00:22.659729 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:22.662852 kubelet[2810]: E0123 01:00:22.659792 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:00:24.393766 containerd[1550]: time="2026-01-23T01:00:24.393104754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:00:24.463912 containerd[1550]: time="2026-01-23T01:00:24.463369589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:24.467267 containerd[1550]: time="2026-01-23T01:00:24.467067235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:00:24.467267 containerd[1550]: time="2026-01-23T01:00:24.467213639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:00:24.470338 kubelet[2810]: E0123 01:00:24.468314 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:00:24.470338 kubelet[2810]: E0123 01:00:24.468383 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:00:24.470338 kubelet[2810]: E0123 01:00:24.468608 2810 
kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:24.470338 kubelet[2810]: E0123 01:00:24.468653 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:00:24.477785 containerd[1550]: time="2026-01-23T01:00:24.471928924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:00:24.578049 containerd[1550]: time="2026-01-23T01:00:24.577882949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:24.583660 containerd[1550]: time="2026-01-23T01:00:24.583604343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:00:24.583825 containerd[1550]: time="2026-01-23T01:00:24.583789768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:00:24.584307 kubelet[2810]: E0123 01:00:24.584223 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:00:24.584387 kubelet[2810]: E0123 01:00:24.584313 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:00:24.584427 kubelet[2810]: E0123 01:00:24.584407 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:24.587841 containerd[1550]: time="2026-01-23T01:00:24.587768024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:00:24.684698 containerd[1550]: time="2026-01-23T01:00:24.684365562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:24.688797 containerd[1550]: time="2026-01-23T01:00:24.688753816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:00:24.689205 containerd[1550]: time="2026-01-23T01:00:24.689062182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:00:24.692206 kubelet[2810]: E0123 01:00:24.691922 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:00:24.692206 kubelet[2810]: E0123 01:00:24.692117 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:00:24.693227 kubelet[2810]: E0123 01:00:24.693051 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:24.693227 kubelet[2810]: E0123 01:00:24.693119 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:00:25.394490 containerd[1550]: time="2026-01-23T01:00:25.394392631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:00:25.489034 containerd[1550]: time="2026-01-23T01:00:25.488337623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:25.493787 containerd[1550]: time="2026-01-23T01:00:25.493585035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:00:25.493787 containerd[1550]: time="2026-01-23T01:00:25.493641641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:00:25.494112 kubelet[2810]: E0123 01:00:25.493924 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:00:25.494112 kubelet[2810]: E0123 01:00:25.494050 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:00:25.494725 kubelet[2810]: E0123 01:00:25.494221 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:25.494725 kubelet[2810]: E0123 01:00:25.494261 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:00:26.420456 containerd[1550]: time="2026-01-23T01:00:26.415573043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:00:26.502299 containerd[1550]: time="2026-01-23T01:00:26.501665598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:00:26.508064 containerd[1550]: time="2026-01-23T01:00:26.508015000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:00:26.512081 containerd[1550]: time="2026-01-23T01:00:26.508514492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:00:26.514919 kubelet[2810]: E0123 01:00:26.510700 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:00:26.514919 kubelet[2810]: E0123 01:00:26.514479 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:00:26.518761 kubelet[2810]: E0123 01:00:26.516767 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:00:26.518761 kubelet[2810]: E0123 01:00:26.516818 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:00:34.395134 kubelet[2810]: E0123 01:00:34.393443 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:00:35.399855 kubelet[2810]: E0123 01:00:35.396072 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:00:36.396815 kubelet[2810]: E0123 01:00:36.396708 2810 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:00:36.398037 kubelet[2810]: E0123 01:00:36.397851 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:00:38.393899 kubelet[2810]: E0123 01:00:38.393596 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:00:39.398089 kubelet[2810]: E0123 01:00:39.397998 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:00:48.400520 kubelet[2810]: E0123 01:00:48.400384 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:00:48.403471 kubelet[2810]: E0123 01:00:48.401721 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:00:49.395720 kubelet[2810]: E0123 01:00:49.395643 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:00:50.399053 kubelet[2810]: E0123 01:00:50.398861 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:00:50.400764 kubelet[2810]: E0123 01:00:50.399849 2810 pod_workers.go:1324] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:00:54.394131 kubelet[2810]: E0123 01:00:54.393697 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:00:55.457089 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:53914.service - OpenSSH per-connection server daemon (10.0.0.1:53914). 
Jan 23 01:00:55.637150 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 53914 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:00:55.640144 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:55.657752 systemd-logind[1531]: New session 10 of user core. Jan 23 01:00:55.665504 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:00:55.957071 sshd[5071]: Connection closed by 10.0.0.1 port 53914 Jan 23 01:00:55.959134 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:55.967203 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:53914.service: Deactivated successfully. Jan 23 01:00:55.987759 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:00:55.994186 systemd-logind[1531]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:00:55.996732 systemd-logind[1531]: Removed session 10. Jan 23 01:01:00.979381 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:53920.service - OpenSSH per-connection server daemon (10.0.0.1:53920). Jan 23 01:01:01.054500 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 53920 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:01.058665 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:01.068386 systemd-logind[1531]: New session 11 of user core. Jan 23 01:01:01.077834 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:01:01.331684 sshd[5091]: Connection closed by 10.0.0.1 port 53920 Jan 23 01:01:01.333671 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:01.341477 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:53920.service: Deactivated successfully. Jan 23 01:01:01.346896 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:01:01.349202 systemd-logind[1531]: Session 11 logged out. Waiting for processes to exit. 
Jan 23 01:01:01.353200 systemd-logind[1531]: Removed session 11. Jan 23 01:01:01.394524 kubelet[2810]: E0123 01:01:01.394299 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:01:01.395733 kubelet[2810]: E0123 01:01:01.395625 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:01:02.395885 containerd[1550]: time="2026-01-23T01:01:02.395171557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:01:02.488413 containerd[1550]: time="2026-01-23T01:01:02.488299374Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:02.493024 containerd[1550]: time="2026-01-23T01:01:02.492271442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:02.493024 containerd[1550]: time="2026-01-23T01:01:02.492416493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:01:02.495202 kubelet[2810]: E0123 01:01:02.495097 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:02.495769 kubelet[2810]: E0123 01:01:02.495202 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:02.495769 kubelet[2810]: E0123 01:01:02.495369 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" 
Jan 23 01:01:02.495769 kubelet[2810]: E0123 01:01:02.495421 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:01:03.393813 kubelet[2810]: E0123 01:01:03.393156 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:01:05.396893 containerd[1550]: time="2026-01-23T01:01:05.396626993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:01:05.576275 containerd[1550]: time="2026-01-23T01:01:05.575011403Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:05.578433 containerd[1550]: time="2026-01-23T01:01:05.578395363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:01:05.580207 containerd[1550]: 
time="2026-01-23T01:01:05.578607340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:01:05.580436 kubelet[2810]: E0123 01:01:05.580386 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:01:05.581061 kubelet[2810]: E0123 01:01:05.580452 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:01:05.581061 kubelet[2810]: E0123 01:01:05.580547 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:05.586310 containerd[1550]: time="2026-01-23T01:01:05.586181218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:01:05.695834 containerd[1550]: time="2026-01-23T01:01:05.664806693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:05.707020 containerd[1550]: time="2026-01-23T01:01:05.706013025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:01:05.707625 containerd[1550]: time="2026-01-23T01:01:05.707470825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:01:05.708009 kubelet[2810]: E0123 01:01:05.707723 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:01:05.708857 kubelet[2810]: E0123 01:01:05.708783 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:01:05.709067 kubelet[2810]: E0123 01:01:05.709038 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:05.710269 kubelet[2810]: E0123 01:01:05.709156 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:01:06.403530 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:39706.service - OpenSSH per-connection server daemon (10.0.0.1:39706). Jan 23 01:01:06.538074 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 39706 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:06.539348 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:06.553680 systemd-logind[1531]: New session 12 of user core. Jan 23 01:01:06.562657 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:01:06.892198 sshd[5115]: Connection closed by 10.0.0.1 port 39706 Jan 23 01:01:06.898094 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:06.909213 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:39706.service: Deactivated successfully. Jan 23 01:01:06.917504 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:01:06.920647 systemd-logind[1531]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:01:06.924856 systemd-logind[1531]: Removed session 12. 
Jan 23 01:01:07.393417 containerd[1550]: time="2026-01-23T01:01:07.393231770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:01:07.503062 containerd[1550]: time="2026-01-23T01:01:07.502624566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:07.509311 containerd[1550]: time="2026-01-23T01:01:07.509110207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:01:07.509311 containerd[1550]: time="2026-01-23T01:01:07.509179880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:07.509821 kubelet[2810]: E0123 01:01:07.509707 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:07.510509 kubelet[2810]: E0123 01:01:07.509832 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:07.510509 kubelet[2810]: E0123 01:01:07.510112 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:07.510509 kubelet[2810]: E0123 01:01:07.510168 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:01:11.957572 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:39718.service - OpenSSH per-connection server daemon (10.0.0.1:39718). Jan 23 01:01:12.295533 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 39718 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:12.299563 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:12.324030 systemd-logind[1531]: New session 13 of user core. Jan 23 01:01:12.330390 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:01:12.636997 sshd[5166]: Connection closed by 10.0.0.1 port 39718 Jan 23 01:01:12.637608 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:12.653459 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:39718.service: Deactivated successfully. Jan 23 01:01:12.658085 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:01:12.660815 systemd-logind[1531]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:01:12.664767 systemd-logind[1531]: Removed session 13. 
Jan 23 01:01:13.395213 containerd[1550]: time="2026-01-23T01:01:13.395164009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:01:13.502906 containerd[1550]: time="2026-01-23T01:01:13.502150896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:13.510412 containerd[1550]: time="2026-01-23T01:01:13.510091850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:01:13.511860 containerd[1550]: time="2026-01-23T01:01:13.511663610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:01:13.512009 kubelet[2810]: E0123 01:01:13.511732 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:01:13.512009 kubelet[2810]: E0123 01:01:13.511794 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:01:13.512647 kubelet[2810]: E0123 01:01:13.512147 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:13.512699 containerd[1550]: time="2026-01-23T01:01:13.512217760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:01:13.604428 containerd[1550]: time="2026-01-23T01:01:13.603808459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:13.610734 containerd[1550]: time="2026-01-23T01:01:13.610575288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:01:13.610734 containerd[1550]: time="2026-01-23T01:01:13.610690793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:13.611404 kubelet[2810]: E0123 01:01:13.611119 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:01:13.611404 kubelet[2810]: E0123 01:01:13.611234 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:01:13.612894 kubelet[2810]: E0123 01:01:13.611621 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod 
goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:13.612894 kubelet[2810]: E0123 01:01:13.611710 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:01:13.615055 containerd[1550]: time="2026-01-23T01:01:13.613366771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:01:13.699194 containerd[1550]: time="2026-01-23T01:01:13.697544346Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:13.704172 containerd[1550]: time="2026-01-23T01:01:13.704010431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:01:13.704172 containerd[1550]: time="2026-01-23T01:01:13.704124144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:01:13.704870 kubelet[2810]: E0123 01:01:13.704794 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:01:13.705409 kubelet[2810]: E0123 01:01:13.705337 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:01:13.706765 kubelet[2810]: E0123 01:01:13.706390 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:13.706765 kubelet[2810]: E0123 01:01:13.706500 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:01:15.409398 containerd[1550]: time="2026-01-23T01:01:15.409335366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:01:15.485305 containerd[1550]: time="2026-01-23T01:01:15.485036499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:15.488385 containerd[1550]: time="2026-01-23T01:01:15.488232824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:01:15.489068 containerd[1550]: time="2026-01-23T01:01:15.488577227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:01:15.489370 kubelet[2810]: E0123 01:01:15.489138 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:01:15.490026 kubelet[2810]: E0123 01:01:15.489247 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:01:15.490832 kubelet[2810]: E0123 01:01:15.490083 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:15.490832 kubelet[2810]: E0123 01:01:15.490366 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:01:17.402525 kubelet[2810]: E0123 01:01:17.400817 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:01:17.691473 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:55440.service - OpenSSH per-connection server daemon (10.0.0.1:55440). 
Jan 23 01:01:17.796547 sshd[5193]: Accepted publickey for core from 10.0.0.1 port 55440 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:17.798468 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:17.818522 systemd-logind[1531]: New session 14 of user core. Jan 23 01:01:17.827542 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:01:18.115779 sshd[5196]: Connection closed by 10.0.0.1 port 55440 Jan 23 01:01:18.116407 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:18.130792 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:55440.service: Deactivated successfully. Jan 23 01:01:18.135675 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 01:01:18.141454 systemd-logind[1531]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:01:18.152043 systemd-logind[1531]: Removed session 14. Jan 23 01:01:20.398480 kubelet[2810]: E0123 01:01:20.397471 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" 
Jan 23 01:01:22.396653 kubelet[2810]: E0123 01:01:22.396546 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:01:23.152461 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:55902.service - OpenSSH per-connection server daemon (10.0.0.1:55902). Jan 23 01:01:23.363469 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 55902 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:23.366755 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:23.420720 systemd-logind[1531]: New session 15 of user core. Jan 23 01:01:23.427398 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:01:23.904201 sshd[5227]: Connection closed by 10.0.0.1 port 55902 Jan 23 01:01:23.904685 sshd-session[5224]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:23.922414 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:55902.service: Deactivated successfully. Jan 23 01:01:23.930621 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:01:23.946071 systemd-logind[1531]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:01:23.954130 systemd-logind[1531]: Removed session 15. 
Jan 23 01:01:24.412323 kubelet[2810]: E0123 01:01:24.412119 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:01:28.399023 kubelet[2810]: E0123 01:01:28.397793 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:01:28.401819 kubelet[2810]: E0123 01:01:28.401520 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:01:28.937771 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:55910.service - OpenSSH per-connection server daemon (10.0.0.1:55910). Jan 23 01:01:29.064350 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 55910 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:29.067229 sshd-session[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:29.099088 systemd-logind[1531]: New session 16 of user core. Jan 23 01:01:29.116586 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:01:29.384061 sshd[5244]: Connection closed by 10.0.0.1 port 55910 Jan 23 01:01:29.385350 sshd-session[5241]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:29.395346 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:55910.service: Deactivated successfully. Jan 23 01:01:29.406063 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:01:29.408016 systemd-logind[1531]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:01:29.415885 systemd-logind[1531]: Removed session 16. 
Jan 23 01:01:30.403626 kubelet[2810]: E0123 01:01:30.402805 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:01:33.400245 kubelet[2810]: E0123 01:01:33.400153 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:01:34.406633 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:60814.service - OpenSSH per-connection server daemon (10.0.0.1:60814). 
Jan 23 01:01:34.515391 sshd[5261]: Accepted publickey for core from 10.0.0.1 port 60814 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:34.520507 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:34.536457 systemd-logind[1531]: New session 17 of user core. Jan 23 01:01:34.547306 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 01:01:34.909431 sshd[5264]: Connection closed by 10.0.0.1 port 60814 Jan 23 01:01:34.912716 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:34.927103 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:60814.service: Deactivated successfully. Jan 23 01:01:34.932099 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:01:34.939689 systemd-logind[1531]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:01:34.953281 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:60820.service - OpenSSH per-connection server daemon (10.0.0.1:60820). Jan 23 01:01:34.955526 systemd-logind[1531]: Removed session 17. Jan 23 01:01:35.199283 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 60820 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:35.202600 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:35.225877 systemd-logind[1531]: New session 18 of user core. Jan 23 01:01:35.235674 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:01:35.593173 sshd[5285]: Connection closed by 10.0.0.1 port 60820 Jan 23 01:01:35.596043 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:35.622082 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:60820.service: Deactivated successfully. Jan 23 01:01:35.629250 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:01:35.636736 systemd-logind[1531]: Session 18 logged out. Waiting for processes to exit. 
Jan 23 01:01:35.643268 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:60832.service - OpenSSH per-connection server daemon (10.0.0.1:60832). Jan 23 01:01:35.646922 systemd-logind[1531]: Removed session 18. Jan 23 01:01:35.827191 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 60832 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:35.828602 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:35.843064 systemd-logind[1531]: New session 19 of user core. Jan 23 01:01:35.854239 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:01:36.148722 sshd[5300]: Connection closed by 10.0.0.1 port 60832 Jan 23 01:01:36.160201 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:36.187141 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:60832.service: Deactivated successfully. Jan 23 01:01:36.189821 systemd-logind[1531]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:01:36.212481 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:01:36.230404 systemd-logind[1531]: Removed session 19. 
Jan 23 01:01:36.406572 kubelet[2810]: E0123 01:01:36.401307 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:01:37.395012 kubelet[2810]: E0123 01:01:37.394016 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:01:41.183091 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:60840.service - OpenSSH per-connection server daemon (10.0.0.1:60840). Jan 23 01:01:41.315464 sshd[5341]: Accepted publickey for core from 10.0.0.1 port 60840 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:41.325801 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:41.350110 systemd-logind[1531]: New session 20 of user core. Jan 23 01:01:41.385027 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 01:01:41.394604 kubelet[2810]: E0123 01:01:41.394564 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:01:41.795822 sshd[5344]: Connection closed by 10.0.0.1 port 60840 Jan 23 01:01:41.796493 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:41.807119 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:60840.service: Deactivated successfully. Jan 23 01:01:41.816769 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:01:41.823280 systemd-logind[1531]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:01:41.826893 systemd-logind[1531]: Removed session 20. 
Jan 23 01:01:43.392756 kubelet[2810]: E0123 01:01:43.392652 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:01:45.402055 kubelet[2810]: E0123 01:01:45.401595 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:01:46.831452 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:56034.service - OpenSSH per-connection server daemon (10.0.0.1:56034). Jan 23 01:01:46.964675 sshd[5360]: Accepted publickey for core from 10.0.0.1 port 56034 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:46.965863 sshd-session[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:46.987024 systemd-logind[1531]: New session 21 of user core. Jan 23 01:01:46.995334 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 01:01:47.336012 sshd[5366]: Connection closed by 10.0.0.1 port 56034 Jan 23 01:01:47.337245 sshd-session[5360]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:47.349053 systemd-logind[1531]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:01:47.351703 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:56034.service: Deactivated successfully. Jan 23 01:01:47.357676 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:01:47.362182 systemd-logind[1531]: Removed session 21. Jan 23 01:01:48.395705 kubelet[2810]: E0123 01:01:48.395650 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:01:48.405230 kubelet[2810]: E0123 01:01:48.405061 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:01:50.397072 kubelet[2810]: E0123 01:01:50.396802 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:01:52.394222 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:56048.service - OpenSSH per-connection server daemon (10.0.0.1:56048). Jan 23 01:01:52.590534 sshd[5381]: Accepted publickey for core from 10.0.0.1 port 56048 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:52.595676 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:52.618141 systemd-logind[1531]: New session 22 of user core. Jan 23 01:01:52.632609 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:01:53.046196 sshd[5384]: Connection closed by 10.0.0.1 port 56048 Jan 23 01:01:53.044778 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:53.065813 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:56048.service: Deactivated successfully. Jan 23 01:01:53.084445 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:01:53.089902 systemd-logind[1531]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:01:53.097065 systemd-logind[1531]: Removed session 22. 
Jan 23 01:01:53.394157 kubelet[2810]: E0123 01:01:53.393930 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:01:55.399025 kubelet[2810]: E0123 01:01:55.398878 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:01:56.406705 kubelet[2810]: E0123 01:01:56.406505 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:01:58.065118 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:36874.service - OpenSSH per-connection server daemon (10.0.0.1:36874). Jan 23 01:01:58.175773 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 36874 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:01:58.182816 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:01:58.205562 systemd-logind[1531]: New session 23 of user core. Jan 23 01:01:58.211508 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 01:01:58.502514 sshd[5401]: Connection closed by 10.0.0.1 port 36874 Jan 23 01:01:58.503367 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:58.512640 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:36874.service: Deactivated successfully. Jan 23 01:01:58.515804 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 01:01:58.519478 systemd-logind[1531]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:01:58.521866 systemd-logind[1531]: Removed session 23. 
Jan 23 01:02:00.403663 kubelet[2810]: E0123 01:02:00.403557 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:02:02.398701 kubelet[2810]: E0123 01:02:02.398476 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:02:03.405912 kubelet[2810]: E0123 01:02:03.405829 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:02:03.546219 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:55338.service - OpenSSH per-connection server daemon (10.0.0.1:55338). Jan 23 01:02:03.672749 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 55338 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:02:03.680655 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:03.720121 systemd-logind[1531]: New session 24 of user core. Jan 23 01:02:03.737317 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:02:04.172668 sshd[5418]: Connection closed by 10.0.0.1 port 55338 Jan 23 01:02:04.174286 sshd-session[5415]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:04.199665 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:55338.service: Deactivated successfully. Jan 23 01:02:04.209889 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:02:04.217811 systemd-logind[1531]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:02:04.226606 systemd-logind[1531]: Removed session 24. 
Jan 23 01:02:08.399161 kubelet[2810]: E0123 01:02:08.399099 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb"
Jan 23 01:02:08.402128 kubelet[2810]: E0123 01:02:08.400202 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"
Jan 23 01:02:09.235773 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:55348.service - OpenSSH per-connection server daemon (10.0.0.1:55348).
Jan 23 01:02:09.352311 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 55348 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:09.356379 sshd-session[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:09.365166 systemd-logind[1531]: New session 25 of user core.
Jan 23 01:02:09.384882 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 01:02:09.683922 sshd[5435]: Connection closed by 10.0.0.1 port 55348
Jan 23 01:02:09.684781 sshd-session[5432]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:09.700546 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:55348.service: Deactivated successfully.
Jan 23 01:02:09.717212 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 01:02:09.730175 systemd-logind[1531]: Session 25 logged out. Waiting for processes to exit.
Jan 23 01:02:09.740815 systemd-logind[1531]: Removed session 25.
Jan 23 01:02:10.397180 kubelet[2810]: E0123 01:02:10.397088 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41"
Jan 23 01:02:11.398737 kubelet[2810]: E0123 01:02:11.398123 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b"
Jan 23 01:02:14.715710 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:49252.service - OpenSSH per-connection server daemon (10.0.0.1:49252).
Jan 23 01:02:14.827602 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 49252 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:14.831827 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:14.841538 systemd-logind[1531]: New session 26 of user core.
Jan 23 01:02:14.853175 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 01:02:15.108146 sshd[5483]: Connection closed by 10.0.0.1 port 49252
Jan 23 01:02:15.109313 sshd-session[5480]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:15.118995 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:49252.service: Deactivated successfully.
Jan 23 01:02:15.130278 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 01:02:15.133321 systemd-logind[1531]: Session 26 logged out. Waiting for processes to exit.
Jan 23 01:02:15.138820 systemd-logind[1531]: Removed session 26.
Jan 23 01:02:15.399531 kubelet[2810]: E0123 01:02:15.398864 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169"
Jan 23 01:02:18.395202 kubelet[2810]: E0123 01:02:18.394927 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5"
Jan 23 01:02:19.395564 kubelet[2810]: E0123 01:02:19.395342 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb"
Jan 23 01:02:20.134600 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:49260.service - OpenSSH per-connection server daemon (10.0.0.1:49260).
Jan 23 01:02:20.230320 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 49260 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:20.234190 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:20.246617 systemd-logind[1531]: New session 27 of user core.
Jan 23 01:02:20.254237 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 01:02:20.400158 kubelet[2810]: E0123 01:02:20.399903 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"
Jan 23 01:02:20.402015 kubelet[2810]: E0123 01:02:20.401213 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:20.443224 sshd[5505]: Connection closed by 10.0.0.1 port 49260
Jan 23 01:02:20.444271 sshd-session[5502]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:20.451825 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:49260.service: Deactivated successfully.
Jan 23 01:02:20.455884 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 01:02:20.458078 systemd-logind[1531]: Session 27 logged out. Waiting for processes to exit.
Jan 23 01:02:20.460624 systemd-logind[1531]: Removed session 27.
Jan 23 01:02:22.399912 kubelet[2810]: E0123 01:02:22.399540 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:25.396817 kubelet[2810]: E0123 01:02:25.396711 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41"
Jan 23 01:02:25.470561 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:48298.service - OpenSSH per-connection server daemon (10.0.0.1:48298).
Jan 23 01:02:25.566254 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 48298 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:25.568417 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:25.603526 systemd-logind[1531]: New session 28 of user core.
Jan 23 01:02:25.609253 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 23 01:02:25.954747 sshd[5528]: Connection closed by 10.0.0.1 port 48298
Jan 23 01:02:25.957327 sshd-session[5525]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:25.982573 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:48298.service: Deactivated successfully.
Jan 23 01:02:25.989295 systemd[1]: session-28.scope: Deactivated successfully.
Jan 23 01:02:25.994785 systemd-logind[1531]: Session 28 logged out. Waiting for processes to exit.
Jan 23 01:02:26.004006 systemd-logind[1531]: Removed session 28.
Jan 23 01:02:26.405039 containerd[1550]: time="2026-01-23T01:02:26.401113368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 01:02:26.409194 kubelet[2810]: E0123 01:02:26.408426 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169"
Jan 23 01:02:26.537028 containerd[1550]: time="2026-01-23T01:02:26.536138907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:26.545714 containerd[1550]: time="2026-01-23T01:02:26.544371080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 01:02:26.545714 containerd[1550]: time="2026-01-23T01:02:26.544422201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 01:02:26.546056 kubelet[2810]: E0123 01:02:26.544722 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:02:26.546056 kubelet[2810]: E0123 01:02:26.544781 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 01:02:26.546056 kubelet[2810]: E0123 01:02:26.544880 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:26.549357 containerd[1550]: time="2026-01-23T01:02:26.548928021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 01:02:26.641985 containerd[1550]: time="2026-01-23T01:02:26.641798165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:26.666679 containerd[1550]: time="2026-01-23T01:02:26.665349981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:02:26.666679 containerd[1550]: time="2026-01-23T01:02:26.665536325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 01:02:26.676252 kubelet[2810]: E0123 01:02:26.667779 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:02:26.676252 kubelet[2810]: E0123 01:02:26.667858 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 01:02:26.676252 kubelet[2810]: E0123 01:02:26.668153 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-c95865886-4cvht_calico-system(6a42d98b-0861-4abd-98c0-5f1896587e7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:26.676443 kubelet[2810]: E0123 01:02:26.668233 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b"
Jan 23 01:02:30.394549 kubelet[2810]: E0123 01:02:30.394174 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:31.003133 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:48314.service - OpenSSH per-connection server daemon (10.0.0.1:48314).
Jan 23 01:02:31.155598 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 48314 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:31.158436 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:31.201772 systemd-logind[1531]: New session 29 of user core.
Jan 23 01:02:31.207626 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 23 01:02:31.599379 sshd[5550]: Connection closed by 10.0.0.1 port 48314
Jan 23 01:02:31.601822 sshd-session[5547]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:31.615923 systemd-logind[1531]: Session 29 logged out. Waiting for processes to exit.
Jan 23 01:02:31.620433 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:48314.service: Deactivated successfully.
Jan 23 01:02:31.625400 systemd[1]: session-29.scope: Deactivated successfully.
Jan 23 01:02:31.631424 systemd-logind[1531]: Removed session 29.
Jan 23 01:02:32.418328 containerd[1550]: time="2026-01-23T01:02:32.417170152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:02:32.503809 containerd[1550]: time="2026-01-23T01:02:32.503748627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:32.507359 containerd[1550]: time="2026-01-23T01:02:32.507305329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:02:32.507734 containerd[1550]: time="2026-01-23T01:02:32.507564965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:02:32.513770 kubelet[2810]: E0123 01:02:32.513727 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:02:32.516870 kubelet[2810]: E0123 01:02:32.515802 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:02:32.516870 kubelet[2810]: E0123 01:02:32.515927 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-jwv45_calico-apiserver(479d141d-917c-42c5-8315-9e3283f05aa9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:32.516870 kubelet[2810]: E0123 01:02:32.516052 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"
Jan 23 01:02:33.394014 kubelet[2810]: E0123 01:02:33.393829 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:33.401579 kubelet[2810]: E0123 01:02:33.401377 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5"
Jan 23 01:02:33.402755 kubelet[2810]: E0123 01:02:33.402607 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb"
Jan 23 01:02:36.618621 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:35686.service - OpenSSH per-connection server daemon (10.0.0.1:35686).
Jan 23 01:02:36.712905 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 35686 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:36.716704 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:36.728086 systemd-logind[1531]: New session 30 of user core.
Jan 23 01:02:36.735195 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 23 01:02:37.033739 sshd[5571]: Connection closed by 10.0.0.1 port 35686
Jan 23 01:02:37.035040 sshd-session[5568]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:37.049130 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:35686.service: Deactivated successfully.
Jan 23 01:02:37.056903 systemd[1]: session-30.scope: Deactivated successfully.
Jan 23 01:02:37.063698 systemd-logind[1531]: Session 30 logged out. Waiting for processes to exit.
Jan 23 01:02:37.070790 systemd-logind[1531]: Removed session 30.
Jan 23 01:02:37.394620 kubelet[2810]: E0123 01:02:37.390841 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:40.403280 containerd[1550]: time="2026-01-23T01:02:40.403120463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 01:02:40.410377 kubelet[2810]: E0123 01:02:40.410255 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b"
Jan 23 01:02:40.500284 containerd[1550]: time="2026-01-23T01:02:40.500139361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:40.509109 containerd[1550]: time="2026-01-23T01:02:40.507719938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 01:02:40.509109 containerd[1550]: time="2026-01-23T01:02:40.507833600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 01:02:40.509304 kubelet[2810]: E0123 01:02:40.508789 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:02:40.509304 kubelet[2810]: E0123 01:02:40.508833 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 01:02:40.515596 containerd[1550]: time="2026-01-23T01:02:40.510888393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 01:02:40.515697 kubelet[2810]: E0123 01:02:40.514256 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7cbc9d4d7d-44t6d_calico-apiserver(7c8b37a9-79e1-44f6-bd0d-7ff95f46b169): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:40.515697 kubelet[2810]: E0123 01:02:40.514313 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169"
Jan 23 01:02:40.581851 containerd[1550]: time="2026-01-23T01:02:40.581701764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:40.586014 containerd[1550]: time="2026-01-23T01:02:40.585779310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 01:02:40.586210 containerd[1550]: time="2026-01-23T01:02:40.585907530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:02:40.586443 kubelet[2810]: E0123 01:02:40.586365 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:02:40.586443 kubelet[2810]: E0123 01:02:40.586419 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:02:40.586618 kubelet[2810]: E0123 01:02:40.586567 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-558649896b-xvhfg_calico-system(655c83b6-f33b-4c1f-8ca9-c00c869c6e41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:40.586824 kubelet[2810]: E0123 01:02:40.586624 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41"
Jan 23 01:02:42.063352 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:35688.service - OpenSSH per-connection server daemon (10.0.0.1:35688).
Jan 23 01:02:42.216787 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 35688 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:42.220295 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:42.243063 systemd-logind[1531]: New session 31 of user core.
Jan 23 01:02:42.259593 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 23 01:02:42.562174 sshd[5612]: Connection closed by 10.0.0.1 port 35688
Jan 23 01:02:42.564077 sshd-session[5609]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:42.587604 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:35688.service: Deactivated successfully.
Jan 23 01:02:42.596348 systemd[1]: session-31.scope: Deactivated successfully.
Jan 23 01:02:42.605433 systemd-logind[1531]: Session 31 logged out. Waiting for processes to exit.
Jan 23 01:02:42.608288 systemd-logind[1531]: Removed session 31.
Jan 23 01:02:45.399436 containerd[1550]: time="2026-01-23T01:02:45.398024008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 01:02:45.508334 containerd[1550]: time="2026-01-23T01:02:45.507747475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:45.514055 containerd[1550]: time="2026-01-23T01:02:45.513856530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 01:02:45.514269 containerd[1550]: time="2026-01-23T01:02:45.514090215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 01:02:45.515155 kubelet[2810]: E0123 01:02:45.515051 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:02:45.515155 kubelet[2810]: E0123 01:02:45.515119 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 01:02:45.520549 kubelet[2810]: E0123 01:02:45.519776 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:45.526258 containerd[1550]: time="2026-01-23T01:02:45.526139986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:02:45.616476 containerd[1550]: time="2026-01-23T01:02:45.616220456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:02:45.620189 containerd[1550]: time="2026-01-23T01:02:45.620102074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:02:45.625591 kubelet[2810]: E0123 01:02:45.625114 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:02:45.625591 kubelet[2810]: E0123 01:02:45.625196 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:02:45.625591 kubelet[2810]: E0123 01:02:45.625295 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qpc42_calico-system(1a86b4da-5edc-4f85-b21e-20314381c9bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:02:45.625917 kubelet[2810]: E0123 01:02:45.625474 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb"
Jan 23 01:02:45.626196 containerd[1550]: time="2026-01-23T01:02:45.620369554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:02:46.399379 kubelet[2810]: E0123 01:02:46.399263 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"
Jan 23 01:02:47.391831 kubelet[2810]: E0123 01:02:47.391720 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:47.392599 kubelet[2810]: E0123 01:02:47.392496 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:02:47.620300 systemd[1]: Started sshd@31-10.0.0.13:22-10.0.0.1:35992.service - OpenSSH per-connection server daemon (10.0.0.1:35992).
Jan 23 01:02:47.801033 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 35992 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:02:47.808196 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:02:47.838616 systemd-logind[1531]: New session 32 of user core.
Jan 23 01:02:47.850179 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 23 01:02:48.202763 sshd[5631]: Connection closed by 10.0.0.1 port 35992
Jan 23 01:02:48.198457 sshd-session[5628]: pam_unix(sshd:session): session closed for user core
Jan 23 01:02:48.217185 systemd[1]: sshd@31-10.0.0.13:22-10.0.0.1:35992.service: Deactivated successfully.
Jan 23 01:02:48.229696 systemd[1]: session-32.scope: Deactivated successfully.
Jan 23 01:02:48.232193 systemd-logind[1531]: Session 32 logged out. Waiting for processes to exit.
Jan 23 01:02:48.244655 systemd[1]: Started sshd@32-10.0.0.13:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998).
Jan 23 01:02:48.253752 systemd-logind[1531]: Removed session 32. Jan 23 01:02:48.352169 sshd[5645]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:02:48.363712 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:48.379471 systemd-logind[1531]: New session 33 of user core. Jan 23 01:02:48.386476 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 01:02:48.397670 containerd[1550]: time="2026-01-23T01:02:48.397096904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:02:48.493406 containerd[1550]: time="2026-01-23T01:02:48.493099292Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:48.501283 containerd[1550]: time="2026-01-23T01:02:48.501232982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:48.501576 containerd[1550]: time="2026-01-23T01:02:48.501447262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:02:48.502688 kubelet[2810]: E0123 01:02:48.502305 2810 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:02:48.502688 kubelet[2810]: E0123 01:02:48.502372 2810 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:02:48.502688 kubelet[2810]: E0123 01:02:48.502480 2810 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-j5rv6_calico-system(43accc0b-89ee-4b5d-a714-8b1afe2391c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:48.504226 kubelet[2810]: E0123 01:02:48.504173 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:02:49.350246 sshd[5648]: Connection closed by 10.0.0.1 port 35998 Jan 23 01:02:49.352291 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:49.367312 systemd[1]: sshd@32-10.0.0.13:22-10.0.0.1:35998.service: Deactivated successfully. Jan 23 01:02:49.372860 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 01:02:49.391126 systemd-logind[1531]: Session 33 logged out. Waiting for processes to exit. Jan 23 01:02:49.398427 systemd[1]: Started sshd@33-10.0.0.13:22-10.0.0.1:36004.service - OpenSSH per-connection server daemon (10.0.0.1:36004). Jan 23 01:02:49.401124 systemd-logind[1531]: Removed session 33. 
Jan 23 01:02:49.593783 sshd[5660]: Accepted publickey for core from 10.0.0.1 port 36004 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:02:49.596703 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:49.614102 systemd-logind[1531]: New session 34 of user core. Jan 23 01:02:49.628844 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 23 01:02:50.394014 kubelet[2810]: E0123 01:02:50.393121 2810 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:02:50.907617 sshd[5663]: Connection closed by 10.0.0.1 port 36004 Jan 23 01:02:50.910381 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:50.931636 systemd[1]: sshd@33-10.0.0.13:22-10.0.0.1:36004.service: Deactivated successfully. Jan 23 01:02:50.936703 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 01:02:50.941782 systemd-logind[1531]: Session 34 logged out. Waiting for processes to exit. Jan 23 01:02:50.948342 systemd[1]: Started sshd@34-10.0.0.13:22-10.0.0.1:36016.service - OpenSSH per-connection server daemon (10.0.0.1:36016). Jan 23 01:02:50.951344 systemd-logind[1531]: Removed session 34. Jan 23 01:02:51.078025 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 36016 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:02:51.083618 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:51.100130 systemd-logind[1531]: New session 35 of user core. Jan 23 01:02:51.108823 systemd[1]: Started session-35.scope - Session 35 of User core. 
Jan 23 01:02:51.760063 sshd[5698]: Connection closed by 10.0.0.1 port 36016 Jan 23 01:02:51.762504 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:51.804705 systemd[1]: sshd@34-10.0.0.13:22-10.0.0.1:36016.service: Deactivated successfully. Jan 23 01:02:51.810916 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 01:02:51.817576 systemd-logind[1531]: Session 35 logged out. Waiting for processes to exit. Jan 23 01:02:51.827596 systemd[1]: Started sshd@35-10.0.0.13:22-10.0.0.1:36024.service - OpenSSH per-connection server daemon (10.0.0.1:36024). Jan 23 01:02:51.833715 systemd-logind[1531]: Removed session 35. Jan 23 01:02:51.982569 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 36024 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:02:51.986334 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:52.011631 systemd-logind[1531]: New session 36 of user core. Jan 23 01:02:52.023228 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 01:02:52.325631 sshd[5715]: Connection closed by 10.0.0.1 port 36024 Jan 23 01:02:52.326921 sshd-session[5710]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:52.339459 systemd[1]: sshd@35-10.0.0.13:22-10.0.0.1:36024.service: Deactivated successfully. Jan 23 01:02:52.351234 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 01:02:52.355995 systemd-logind[1531]: Session 36 logged out. Waiting for processes to exit. Jan 23 01:02:52.360040 systemd-logind[1531]: Removed session 36. 
Jan 23 01:02:54.395026 kubelet[2810]: E0123 01:02:54.394739 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:02:55.397705 kubelet[2810]: E0123 01:02:55.397289 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:02:55.403653 kubelet[2810]: E0123 01:02:55.403497 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:02:57.359309 systemd[1]: Started sshd@36-10.0.0.13:22-10.0.0.1:43232.service - OpenSSH per-connection server daemon (10.0.0.1:43232). Jan 23 01:02:57.485305 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 43232 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:02:57.489233 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:57.517831 systemd-logind[1531]: New session 37 of user core. Jan 23 01:02:57.531524 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 01:02:57.796134 sshd[5744]: Connection closed by 10.0.0.1 port 43232 Jan 23 01:02:57.794233 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:57.805468 systemd[1]: sshd@36-10.0.0.13:22-10.0.0.1:43232.service: Deactivated successfully. Jan 23 01:02:57.810888 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 01:02:57.813093 systemd-logind[1531]: Session 37 logged out. Waiting for processes to exit. Jan 23 01:02:57.819450 systemd-logind[1531]: Removed session 37. 
Jan 23 01:02:58.406361 kubelet[2810]: E0123 01:02:58.402401 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:02:58.411827 kubelet[2810]: E0123 01:02:58.411672 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:03:02.407268 kubelet[2810]: E0123 01:03:02.407101 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:03:02.829136 systemd[1]: Started sshd@37-10.0.0.13:22-10.0.0.1:56500.service - OpenSSH per-connection server daemon (10.0.0.1:56500). Jan 23 01:03:03.004384 sshd[5759]: Accepted publickey for core from 10.0.0.1 port 56500 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:03:03.008099 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:03.031327 systemd-logind[1531]: New session 38 of user core. Jan 23 01:03:03.046537 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 01:03:03.356241 sshd[5762]: Connection closed by 10.0.0.1 port 56500 Jan 23 01:03:03.359097 sshd-session[5759]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:03.391445 systemd[1]: sshd@37-10.0.0.13:22-10.0.0.1:56500.service: Deactivated successfully. Jan 23 01:03:03.401024 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 01:03:03.407295 systemd-logind[1531]: Session 38 logged out. Waiting for processes to exit. Jan 23 01:03:03.415818 systemd-logind[1531]: Removed session 38. 
Jan 23 01:03:08.401867 kubelet[2810]: E0123 01:03:08.401149 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:03:08.417006 kubelet[2810]: E0123 01:03:08.415288 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:03:08.430739 systemd[1]: Started sshd@38-10.0.0.13:22-10.0.0.1:56502.service - OpenSSH per-connection server daemon (10.0.0.1:56502). 
Jan 23 01:03:08.569288 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 56502 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:03:08.575723 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:08.595691 systemd-logind[1531]: New session 39 of user core. Jan 23 01:03:08.608004 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 23 01:03:08.949039 sshd[5778]: Connection closed by 10.0.0.1 port 56502 Jan 23 01:03:08.952336 sshd-session[5775]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:08.968835 systemd[1]: sshd@38-10.0.0.13:22-10.0.0.1:56502.service: Deactivated successfully. Jan 23 01:03:08.991338 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 01:03:08.994426 systemd-logind[1531]: Session 39 logged out. Waiting for processes to exit. Jan 23 01:03:08.998523 systemd-logind[1531]: Removed session 39. Jan 23 01:03:09.393694 kubelet[2810]: E0123 01:03:09.393032 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:03:09.410127 kubelet[2810]: E0123 01:03:09.408342 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:03:11.581279 kubelet[2810]: E0123 01:03:11.580454 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9" Jan 23 01:03:14.009174 systemd[1]: Started sshd@39-10.0.0.13:22-10.0.0.1:47288.service - OpenSSH per-connection server daemon (10.0.0.1:47288). Jan 23 01:03:14.151259 sshd[5821]: Accepted publickey for core from 10.0.0.1 port 47288 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:03:14.159270 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:14.206384 systemd-logind[1531]: New session 40 of user core. Jan 23 01:03:14.221615 systemd[1]: Started session-40.scope - Session 40 of User core. 
Jan 23 01:03:14.445393 sshd[5826]: Connection closed by 10.0.0.1 port 47288 Jan 23 01:03:14.447208 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:14.468891 systemd[1]: sshd@39-10.0.0.13:22-10.0.0.1:47288.service: Deactivated successfully. Jan 23 01:03:14.487240 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 01:03:14.495227 systemd-logind[1531]: Session 40 logged out. Waiting for processes to exit. Jan 23 01:03:14.504643 systemd-logind[1531]: Removed session 40. Jan 23 01:03:16.409775 kubelet[2810]: E0123 01:03:16.409112 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-j5rv6" podUID="43accc0b-89ee-4b5d-a714-8b1afe2391c5" Jan 23 01:03:18.750383 containerd[1550]: time="2026-01-23T01:03:18.750060561Z" level=warning msg="container event discarded" container=0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b type=CONTAINER_CREATED_EVENT Jan 23 01:03:18.763817 containerd[1550]: time="2026-01-23T01:03:18.763040644Z" level=warning msg="container event discarded" container=0225059d890f0e8ccadf9666906e2b9ed4642ec8f74a6ac3c763b20980c2981b type=CONTAINER_STARTED_EVENT Jan 23 01:03:18.763817 containerd[1550]: time="2026-01-23T01:03:18.763370620Z" level=warning msg="container event discarded" container=abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff type=CONTAINER_CREATED_EVENT Jan 23 01:03:18.763817 containerd[1550]: time="2026-01-23T01:03:18.763400305Z" level=warning msg="container event discarded" 
container=abfbcd7b5d58a78a0faa6b97c578ca3a9d5283fa83789cee424d6c72a5009bff type=CONTAINER_STARTED_EVENT Jan 23 01:03:18.836885 containerd[1550]: time="2026-01-23T01:03:18.836706318Z" level=warning msg="container event discarded" container=f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773 type=CONTAINER_CREATED_EVENT Jan 23 01:03:18.836885 containerd[1550]: time="2026-01-23T01:03:18.836831773Z" level=warning msg="container event discarded" container=f37cf133c88f43e335cc525da98821f96f6ae4afbb3ced9c418786bc20262773 type=CONTAINER_STARTED_EVENT Jan 23 01:03:19.126588 containerd[1550]: time="2026-01-23T01:03:19.126509102Z" level=warning msg="container event discarded" container=99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618 type=CONTAINER_CREATED_EVENT Jan 23 01:03:19.139039 containerd[1550]: time="2026-01-23T01:03:19.138880852Z" level=warning msg="container event discarded" container=43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023 type=CONTAINER_CREATED_EVENT Jan 23 01:03:19.164360 containerd[1550]: time="2026-01-23T01:03:19.162588866Z" level=warning msg="container event discarded" container=1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0 type=CONTAINER_CREATED_EVENT Jan 23 01:03:19.354815 containerd[1550]: time="2026-01-23T01:03:19.354725782Z" level=warning msg="container event discarded" container=43e9b75dd60fc8232d668f98e56b85e178e434d4cfeb2e760ff0872875583023 type=CONTAINER_STARTED_EVENT Jan 23 01:03:19.388026 containerd[1550]: time="2026-01-23T01:03:19.385222704Z" level=warning msg="container event discarded" container=99c925d4f549af35810a3e3dd67d40a8474bb60b97f0e6ea75e62a019a040618 type=CONTAINER_STARTED_EVENT Jan 23 01:03:19.414297 containerd[1550]: time="2026-01-23T01:03:19.414161669Z" level=warning msg="container event discarded" container=1922dc08dff260148aa606a913d2f20459165371b4381f478419688be74844f0 type=CONTAINER_STARTED_EVENT Jan 23 01:03:19.511121 systemd[1]: Started 
sshd@40-10.0.0.13:22-10.0.0.1:47290.service - OpenSSH per-connection server daemon (10.0.0.1:47290). Jan 23 01:03:19.735286 sshd[5839]: Accepted publickey for core from 10.0.0.1 port 47290 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:03:19.737045 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:19.764659 systemd-logind[1531]: New session 41 of user core. Jan 23 01:03:19.797513 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 23 01:03:20.320132 sshd[5842]: Connection closed by 10.0.0.1 port 47290 Jan 23 01:03:20.326203 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:20.346929 systemd-logind[1531]: Session 41 logged out. Waiting for processes to exit. Jan 23 01:03:20.352147 systemd[1]: sshd@40-10.0.0.13:22-10.0.0.1:47290.service: Deactivated successfully. Jan 23 01:03:20.360551 systemd[1]: session-41.scope: Deactivated successfully. Jan 23 01:03:20.400093 systemd-logind[1531]: Removed session 41. 
Jan 23 01:03:21.397478 kubelet[2810]: E0123 01:03:21.396547 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-44t6d" podUID="7c8b37a9-79e1-44f6-bd0d-7ff95f46b169" Jan 23 01:03:21.401461 kubelet[2810]: E0123 01:03:21.397596 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-558649896b-xvhfg" podUID="655c83b6-f33b-4c1f-8ca9-c00c869c6e41" Jan 23 01:03:21.413254 kubelet[2810]: E0123 01:03:21.412337 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c95865886-4cvht" podUID="6a42d98b-0861-4abd-98c0-5f1896587e7b" Jan 23 01:03:22.406267 kubelet[2810]: E0123 01:03:22.404682 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qpc42" podUID="1a86b4da-5edc-4f85-b21e-20314381c9bb" Jan 23 01:03:25.349170 systemd[1]: Started sshd@41-10.0.0.13:22-10.0.0.1:60624.service - OpenSSH per-connection server daemon (10.0.0.1:60624). Jan 23 01:03:25.492994 sshd[5856]: Accepted publickey for core from 10.0.0.1 port 60624 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:03:25.498306 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:25.524665 systemd-logind[1531]: New session 42 of user core. 
Jan 23 01:03:25.536240 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 23 01:03:25.818139 sshd[5859]: Connection closed by 10.0.0.1 port 60624 Jan 23 01:03:25.822145 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Jan 23 01:03:25.841221 systemd[1]: sshd@41-10.0.0.13:22-10.0.0.1:60624.service: Deactivated successfully. Jan 23 01:03:25.851054 systemd[1]: session-42.scope: Deactivated successfully. Jan 23 01:03:25.859520 systemd-logind[1531]: Session 42 logged out. Waiting for processes to exit. Jan 23 01:03:25.863541 systemd-logind[1531]: Removed session 42. Jan 23 01:03:26.402191 kubelet[2810]: E0123 01:03:26.398874 2810 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cbc9d4d7d-jwv45" podUID="479d141d-917c-42c5-8315-9e3283f05aa9"