Jan 23 17:57:41.814739 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 17:57:41.814762 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026
Jan 23 17:57:41.814771 kernel: KASLR enabled
Jan 23 17:57:41.814777 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 17:57:41.814783 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 23 17:57:41.814788 kernel: random: crng init done
Jan 23 17:57:41.814795 kernel: secureboot: Secure boot disabled
Jan 23 17:57:41.814800 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:57:41.814807 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 23 17:57:41.814813 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 17:57:41.814820 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814826 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814832 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814837 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814881 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814892 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814898 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814904 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814911 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:57:41.814917 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 17:57:41.814923 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 23 17:57:41.814929 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:57:41.814935 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 17:57:41.814941 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Jan 23 17:57:41.814947 kernel: Zone ranges:
Jan 23 17:57:41.814953 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:57:41.814961 kernel: DMA32 empty
Jan 23 17:57:41.814966 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 23 17:57:41.814972 kernel: Device empty
Jan 23 17:57:41.814978 kernel: Movable zone start for each node
Jan 23 17:57:41.814984 kernel: Early memory node ranges
Jan 23 17:57:41.814990 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 23 17:57:41.814997 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 23 17:57:41.815003 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 23 17:57:41.815008 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 23 17:57:41.815014 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 23 17:57:41.815020 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 23 17:57:41.815026 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 23 17:57:41.815034 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 23 17:57:41.815040 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 23 17:57:41.815049 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 17:57:41.815055 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 17:57:41.815062 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 23 17:57:41.815069 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:57:41.815076 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 17:57:41.815082 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:57:41.815088 kernel: psci: Trusted OS migration not required
Jan 23 17:57:41.815095 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:57:41.815101 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 23 17:57:41.815108 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:57:41.815114 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:57:41.815121 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:57:41.815127 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:57:41.815133 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:57:41.815141 kernel: CPU features: detected: Spectre-v4
Jan 23 17:57:41.815148 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:57:41.815154 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 17:57:41.815161 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 17:57:41.815167 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 17:57:41.815173 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 17:57:41.815180 kernel: alternatives: applying boot alternatives
Jan 23 17:57:41.815187 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:57:41.815194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:57:41.815200 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:57:41.815207 kernel: Fallback order for Node 0: 0
Jan 23 17:57:41.815215 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Jan 23 17:57:41.815221 kernel: Policy zone: Normal
Jan 23 17:57:41.815227 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:57:41.815234 kernel: software IO TLB: area num 2.
Jan 23 17:57:41.815240 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 23 17:57:41.815247 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:57:41.815253 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:57:41.815260 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:57:41.815295 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:57:41.815303 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:57:41.815309 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:57:41.815316 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:57:41.815324 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:57:41.815331 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:57:41.815338 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:57:41.815344 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:57:41.815350 kernel: GICv3: 256 SPIs implemented
Jan 23 17:57:41.815357 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:57:41.815363 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:57:41.815383 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 23 17:57:41.815391 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:57:41.815398 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 23 17:57:41.815404 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 23 17:57:41.815413 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:57:41.815420 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:57:41.815426 kernel: GICv3: using LPI property table @0x0000000100120000
Jan 23 17:57:41.815433 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Jan 23 17:57:41.815439 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:57:41.815446 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:57:41.815452 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 17:57:41.815459 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 17:57:41.815466 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 17:57:41.815472 kernel: Console: colour dummy device 80x25
Jan 23 17:57:41.815479 kernel: ACPI: Core revision 20240827
Jan 23 17:57:41.815487 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 17:57:41.815494 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:57:41.815501 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:57:41.815508 kernel: landlock: Up and running.
Jan 23 17:57:41.815514 kernel: SELinux: Initializing.
Jan 23 17:57:41.815521 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:57:41.815528 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:57:41.815534 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:57:41.815541 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:57:41.815549 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:57:41.815556 kernel: Remapping and enabling EFI services.
Jan 23 17:57:41.815563 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:57:41.815569 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:57:41.815576 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 23 17:57:41.815583 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Jan 23 17:57:41.815589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:57:41.815596 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 17:57:41.815603 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:57:41.815610 kernel: SMP: Total of 2 processors activated.
Jan 23 17:57:41.815622 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:57:41.815630 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:57:41.815638 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 17:57:41.815645 kernel: CPU features: detected: Common not Private translations
Jan 23 17:57:41.815652 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:57:41.815659 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 23 17:57:41.815667 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 17:57:41.815675 kernel: CPU features: detected: LSE atomic instructions
Jan 23 17:57:41.815682 kernel: CPU features: detected: Privileged Access Never
Jan 23 17:57:41.815689 kernel: CPU features: detected: RAS Extension Support
Jan 23 17:57:41.815696 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 17:57:41.815703 kernel: alternatives: applying system-wide alternatives
Jan 23 17:57:41.815710 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 23 17:57:41.815718 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Jan 23 17:57:41.815725 kernel: devtmpfs: initialized
Jan 23 17:57:41.815732 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:57:41.815741 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:57:41.815748 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 17:57:41.815755 kernel: 0 pages in range for non-PLT usage
Jan 23 17:57:41.815762 kernel: 508400 pages in range for PLT usage
Jan 23 17:57:41.815769 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:57:41.815776 kernel: SMBIOS 3.0.0 present.
Jan 23 17:57:41.815783 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 23 17:57:41.815790 kernel: DMI: Memory slots populated: 1/1
Jan 23 17:57:41.815798 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:57:41.815806 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:57:41.815813 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:57:41.815820 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:57:41.815827 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:57:41.815834 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 23 17:57:41.815841 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:57:41.815858 kernel: cpuidle: using governor menu
Jan 23 17:57:41.815866 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:57:41.815873 kernel: ASID allocator initialised with 32768 entries
Jan 23 17:57:41.815882 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:57:41.815890 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:57:41.815897 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:57:41.815904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:57:41.815911 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:57:41.815918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:57:41.815925 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:57:41.815932 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:57:41.815939 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:57:41.815947 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:57:41.815954 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:57:41.815961 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:57:41.815968 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:57:41.815975 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:57:41.815982 kernel: ACPI: Interpreter enabled
Jan 23 17:57:41.815989 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:57:41.815996 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:57:41.816003 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:57:41.816010 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:57:41.816019 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 17:57:41.816026 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 17:57:41.816033 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 17:57:41.816186 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:57:41.816253 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:57:41.816351 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:57:41.816413 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 23 17:57:41.816476 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 23 17:57:41.816485 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 23 17:57:41.816493 kernel: PCI host bridge to bus 0000:00
Jan 23 17:57:41.816565 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 23 17:57:41.816620 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:57:41.816673 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 23 17:57:41.816725 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 17:57:41.816807 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:57:41.816930 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Jan 23 17:57:41.816998 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Jan 23 17:57:41.817059 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 23 17:57:41.817132 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.817193 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Jan 23 17:57:41.817257 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 17:57:41.817354 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Jan 23 17:57:41.817415 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 23 17:57:41.817483 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.817543 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Jan 23 17:57:41.817601 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 17:57:41.817664 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 17:57:41.817734 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.817796 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Jan 23 17:57:41.817870 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 17:57:41.817936 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 17:57:41.817997 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 23 17:57:41.818065 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.818135 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Jan 23 17:57:41.818198 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 17:57:41.818258 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 17:57:41.818366 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 23 17:57:41.818451 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.818513 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Jan 23 17:57:41.818573 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 17:57:41.818632 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 17:57:41.818694 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 23 17:57:41.818760 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.818819 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Jan 23 17:57:41.818895 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 17:57:41.818955 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Jan 23 17:57:41.819014 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 23 17:57:41.819080 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.819144 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Jan 23 17:57:41.819203 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 17:57:41.819261 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Jan 23 17:57:41.819335 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Jan 23 17:57:41.819403 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.819462 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Jan 23 17:57:41.819524 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 17:57:41.819582 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Jan 23 17:57:41.819649 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:57:41.819709 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Jan 23 17:57:41.819767 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 17:57:41.819825 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 17:57:41.819943 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Jan 23 17:57:41.820098 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Jan 23 17:57:41.820174 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 17:57:41.820237 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Jan 23 17:57:41.820316 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 17:57:41.820379 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 17:57:41.820450 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 17:57:41.820512 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Jan 23 17:57:41.820589 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 17:57:41.820652 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Jan 23 17:57:41.820728 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 23 17:57:41.820807 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 17:57:41.820886 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 23 17:57:41.820957 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 17:57:41.821038 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Jan 23 17:57:41.821102 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 23 17:57:41.821172 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 17:57:41.821234 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Jan 23 17:57:41.821336 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 17:57:41.821410 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 17:57:41.821473 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Jan 23 17:57:41.821536 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 23 17:57:41.821597 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 17:57:41.821659 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 23 17:57:41.821719 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 23 17:57:41.821777 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 23 17:57:41.821839 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 17:57:41.821920 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 17:57:41.821985 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 23 17:57:41.822047 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 17:57:41.822111 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 23 17:57:41.822170 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 23 17:57:41.822232 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 17:57:41.822322 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 23 17:57:41.822384 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 23 17:57:41.822451 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 17:57:41.822510 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 23 17:57:41.822571 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 23 17:57:41.822634 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 17:57:41.822693 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 23 17:57:41.822752 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 23 17:57:41.822815 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 17:57:41.822926 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 23 17:57:41.822992 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 23 17:57:41.823057 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 17:57:41.823117 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 23 17:57:41.823176 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 23 17:57:41.823239 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 17:57:41.823343 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 23 17:57:41.823406 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 23 17:57:41.823471 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jan 23 17:57:41.823530 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Jan 23 17:57:41.823590 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Jan 23 17:57:41.823648 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Jan 23 17:57:41.823709 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Jan 23 17:57:41.823768 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Jan 23 17:57:41.823833 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Jan 23 17:57:41.823906 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Jan 23 17:57:41.823983 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Jan 23 17:57:41.824046 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Jan 23 17:57:41.824107 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Jan 23 17:57:41.824166 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Jan 23 17:57:41.824227 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Jan 23 17:57:41.824298 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Jan 23 17:57:41.824359 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Jan 23 17:57:41.824418 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Jan 23 17:57:41.824477 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Jan 23 17:57:41.824535 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Jan 23 17:57:41.824600 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Jan 23 17:57:41.824658 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Jan 23 17:57:41.824717 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Jan 23 17:57:41.824778 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 17:57:41.824837 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Jan 23 17:57:41.824939 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 17:57:41.825003 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Jan 23 17:57:41.825065 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 17:57:41.825128 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Jan 23 17:57:41.825187 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 17:57:41.825246 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Jan 23 17:57:41.827440 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 17:57:41.827538 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Jan 23 17:57:41.827600 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 17:57:41.827663 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Jan 23 17:57:41.827732 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 17:57:41.827797 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Jan 23 17:57:41.827878 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 17:57:41.827948 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Jan 23 17:57:41.828009 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Jan 23 17:57:41.828072 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Jan 23 17:57:41.828141 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 23 17:57:41.828202 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 23 17:57:41.828281 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 23 17:57:41.828352 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 17:57:41.828413 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 17:57:41.828472 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 17:57:41.828532 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 17:57:41.828600 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 23 17:57:41.828662 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 17:57:41.828722 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 17:57:41.828783 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 23 17:57:41.828842 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 17:57:41.828928 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 23 17:57:41.828990 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 23 17:57:41.829052 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 17:57:41.829112 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 17:57:41.829174 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 23 17:57:41.829232 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 17:57:41.829318 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 23 17:57:41.829383 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 17:57:41.829442 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 17:57:41.829504 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 23 17:57:41.829563 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 17:57:41.829634 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 23 17:57:41.829696 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 23 17:57:41.829756 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 17:57:41.829814 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 17:57:41.829925 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 17:57:41.829991 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 17:57:41.830059 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 23 17:57:41.830125 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 23 17:57:41.830187 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 17:57:41.830261 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 17:57:41.830344 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 17:57:41.830404 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 17:57:41.830471 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Jan 23 17:57:41.830532 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Jan 23 17:57:41.830596 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Jan 23 17:57:41.830658 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 17:57:41.830720 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 17:57:41.830780 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 17:57:41.830840 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 17:57:41.830919 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 17:57:41.830981 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 17:57:41.831041 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 17:57:41.831103 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 17:57:41.831164 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 17:57:41.831223 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 23 
17:57:41.833193 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 17:57:41.833348 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:57:41.833417 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 23 17:57:41.833471 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 17:57:41.833525 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 23 17:57:41.833597 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 23 17:57:41.833654 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 23 17:57:41.833715 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 17:57:41.833778 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 23 17:57:41.833834 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 23 17:57:41.833914 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 17:57:41.833983 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 23 17:57:41.834041 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 23 17:57:41.834100 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 17:57:41.834164 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 23 17:57:41.834219 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 23 17:57:41.834307 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 17:57:41.834375 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 23 17:57:41.834431 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 23 17:57:41.834487 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 17:57:41.834555 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 23 17:57:41.834610 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 23 17:57:41.834665 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 17:57:41.834726 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 23 17:57:41.834783 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 23 17:57:41.834886 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 17:57:41.834966 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 23 17:57:41.835040 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 23 17:57:41.835097 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 17:57:41.835160 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 23 17:57:41.835216 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 23 17:57:41.835342 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:57:41.835354 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 17:57:41.835362 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 17:57:41.835372 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 17:57:41.835380 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 17:57:41.835388 kernel: iommu: Default domain type: Translated Jan 23 17:57:41.835395 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 17:57:41.835403 kernel: efivars: Registered efivars operations Jan 23 17:57:41.835410 kernel: vgaarb: loaded Jan 23 17:57:41.835417 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 17:57:41.835425 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 17:57:41.835432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 17:57:41.835443 kernel: pnp: PnP ACPI init Jan 23 17:57:41.835522 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 23 17:57:41.835534 kernel: pnp: PnP ACPI: found 1 devices Jan 23 17:57:41.835542 kernel: NET: Registered PF_INET 
protocol family Jan 23 17:57:41.835549 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 17:57:41.835557 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 17:57:41.835564 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 17:57:41.835572 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:57:41.835582 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 17:57:41.835589 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 17:57:41.835597 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:57:41.835604 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:57:41.835612 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 17:57:41.835679 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 23 17:57:41.835690 kernel: PCI: CLS 0 bytes, default 64 Jan 23 17:57:41.835698 kernel: kvm [1]: HYP mode not available Jan 23 17:57:41.835705 kernel: Initialise system trusted keyrings Jan 23 17:57:41.835714 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 17:57:41.835722 kernel: Key type asymmetric registered Jan 23 17:57:41.835729 kernel: Asymmetric key parser 'x509' registered Jan 23 17:57:41.835737 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 17:57:41.835744 kernel: io scheduler mq-deadline registered Jan 23 17:57:41.835752 kernel: io scheduler kyber registered Jan 23 17:57:41.835759 kernel: io scheduler bfq registered Jan 23 17:57:41.835767 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 17:57:41.835830 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 23 17:57:41.835910 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 23 17:57:41.835972 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.836037 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 23 17:57:41.836098 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 23 17:57:41.836157 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.836225 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 23 17:57:41.836299 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 23 17:57:41.836361 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.836430 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 23 17:57:41.836490 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 23 17:57:41.836548 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.836611 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 23 17:57:41.836670 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 23 17:57:41.836729 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.836790 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 23 17:57:41.836861 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 23 17:57:41.836927 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.836991 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 23 17:57:41.837051 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 23 17:57:41.837109 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.837172 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 23 17:57:41.837231 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 23 17:57:41.837327 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.837342 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 23 17:57:41.837403 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 23 17:57:41.837464 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 23 17:57:41.837522 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:57:41.837532 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 17:57:41.837540 kernel: ACPI: button: Power Button [PWRB] Jan 23 17:57:41.837548 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:57:41.837612 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 23 17:57:41.837678 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 23 17:57:41.837691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 17:57:41.837699 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:57:41.837763 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 23 17:57:41.837773 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 23 17:57:41.837787 kernel: thunder_xcv, ver 1.0 Jan 23 17:57:41.837795 kernel: thunder_bgx, ver 1.0 Jan 23 17:57:41.837803 kernel: nicpf, ver 1.0 Jan 23 17:57:41.837812 kernel: nicvf, ver 1.0 Jan 23 17:57:41.837949 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 17:57:41.838028 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:57:41 UTC (1769191061) Jan 23 17:57:41.838038 kernel: hid: raw HID events 
driver (C) Jiri Kosina Jan 23 17:57:41.838046 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 17:57:41.838053 kernel: watchdog: NMI not fully supported Jan 23 17:57:41.838061 kernel: watchdog: Hard watchdog permanently disabled Jan 23 17:57:41.838069 kernel: NET: Registered PF_INET6 protocol family Jan 23 17:57:41.838076 kernel: Segment Routing with IPv6 Jan 23 17:57:41.838084 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 17:57:41.838094 kernel: NET: Registered PF_PACKET protocol family Jan 23 17:57:41.838102 kernel: Key type dns_resolver registered Jan 23 17:57:41.838109 kernel: registered taskstats version 1 Jan 23 17:57:41.838116 kernel: Loading compiled-in X.509 certificates Jan 23 17:57:41.838124 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb' Jan 23 17:57:41.838131 kernel: Demotion targets for Node 0: null Jan 23 17:57:41.838139 kernel: Key type .fscrypt registered Jan 23 17:57:41.838146 kernel: Key type fscrypt-provisioning registered Jan 23 17:57:41.838154 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 17:57:41.838163 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:57:41.838170 kernel: ima: No architecture policies found Jan 23 17:57:41.838178 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:57:41.838186 kernel: clk: Disabling unused clocks Jan 23 17:57:41.838193 kernel: PM: genpd: Disabling unused power domains Jan 23 17:57:41.838201 kernel: Warning: unable to open an initial console. Jan 23 17:57:41.838209 kernel: Freeing unused kernel memory: 39552K Jan 23 17:57:41.838216 kernel: Run /init as init process Jan 23 17:57:41.838224 kernel: with arguments: Jan 23 17:57:41.838233 kernel: /init Jan 23 17:57:41.838241 kernel: with environment: Jan 23 17:57:41.838248 kernel: HOME=/ Jan 23 17:57:41.838255 kernel: TERM=linux Jan 23 17:57:41.838275 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 17:57:41.838289 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:57:41.838297 systemd[1]: Detected virtualization kvm. Jan 23 17:57:41.838308 systemd[1]: Detected architecture arm64. Jan 23 17:57:41.838316 systemd[1]: Running in initrd. Jan 23 17:57:41.838324 systemd[1]: No hostname configured, using default hostname. Jan 23 17:57:41.838333 systemd[1]: Hostname set to . Jan 23 17:57:41.838341 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:57:41.838349 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:57:41.838357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:57:41.838365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:57:41.838374 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:57:41.838383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:57:41.838391 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:57:41.838400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:57:41.838409 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 17:57:41.838417 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 17:57:41.838425 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 17:57:41.838434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:57:41.838442 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:57:41.838450 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:57:41.838458 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:57:41.838466 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:57:41.838473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:57:41.838482 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:57:41.838490 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:57:41.838498 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:57:41.838507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:57:41.838515 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:57:41.838523 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:57:41.838531 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:57:41.838539 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:57:41.838547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:57:41.838554 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:57:41.838563 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:57:41.838572 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:57:41.838580 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:57:41.838588 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 17:57:41.838596 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:57:41.838632 systemd-journald[244]: Collecting audit messages is disabled. Jan 23 17:57:41.838654 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:57:41.838663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:57:41.838671 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:57:41.838679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:57:41.838690 systemd-journald[244]: Journal started Jan 23 17:57:41.838708 systemd-journald[244]: Runtime Journal (/run/log/journal/e2722aefcadd4b7699d6911262a6db0c) is 8M, max 76.5M, 68.5M free. Jan 23 17:57:41.839539 systemd-modules-load[246]: Inserted module 'overlay' Jan 23 17:57:41.841410 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:57:41.853306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:57:41.855788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:41.861300 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:57:41.862757 systemd-modules-load[246]: Inserted module 'br_netfilter' Jan 23 17:57:41.863438 kernel: Bridge firewalling registered Jan 23 17:57:41.863700 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:57:41.866124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:57:41.871439 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:57:41.872098 systemd-tmpfiles[261]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 23 17:57:41.876133 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:57:41.879166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:57:41.881547 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:57:41.907803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:41.908755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:57:41.914795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:57:41.920495 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:57:41.924457 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:57:41.948286 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:57:41.974696 systemd-resolved[284]: Positive Trust Anchors: Jan 23 17:57:41.974713 systemd-resolved[284]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:57:41.974745 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:57:41.980899 systemd-resolved[284]: Defaulting to hostname 'linux'. Jan 23 17:57:41.981997 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:57:41.983212 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:57:42.053309 kernel: SCSI subsystem initialized Jan 23 17:57:42.058316 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:57:42.066358 kernel: iscsi: registered transport (tcp) Jan 23 17:57:42.079344 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:57:42.079448 kernel: QLogic iSCSI HBA Driver Jan 23 17:57:42.099928 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:57:42.121471 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:57:42.126574 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:57:42.181375 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:57:42.184307 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 17:57:42.255319 kernel: raid6: neonx8 gen() 15672 MB/s Jan 23 17:57:42.272340 kernel: raid6: neonx4 gen() 15742 MB/s Jan 23 17:57:42.289325 kernel: raid6: neonx2 gen() 13127 MB/s Jan 23 17:57:42.306363 kernel: raid6: neonx1 gen() 10398 MB/s Jan 23 17:57:42.323345 kernel: raid6: int64x8 gen() 6865 MB/s Jan 23 17:57:42.340324 kernel: raid6: int64x4 gen() 7318 MB/s Jan 23 17:57:42.357352 kernel: raid6: int64x2 gen() 6067 MB/s Jan 23 17:57:42.374336 kernel: raid6: int64x1 gen() 5030 MB/s Jan 23 17:57:42.374410 kernel: raid6: using algorithm neonx4 gen() 15742 MB/s Jan 23 17:57:42.391332 kernel: raid6: .... xor() 12269 MB/s, rmw enabled Jan 23 17:57:42.391401 kernel: raid6: using neon recovery algorithm Jan 23 17:57:42.396395 kernel: xor: measuring software checksum speed Jan 23 17:57:42.396476 kernel: 8regs : 18263 MB/sec Jan 23 17:57:42.396503 kernel: 32regs : 18713 MB/sec Jan 23 17:57:42.397530 kernel: arm64_neon : 28022 MB/sec Jan 23 17:57:42.397577 kernel: xor: using function: arm64_neon (28022 MB/sec) Jan 23 17:57:42.451330 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:57:42.460016 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:57:42.466242 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:57:42.494407 systemd-udevd[496]: Using default interface naming scheme 'v255'. Jan 23 17:57:42.498808 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:57:42.505060 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:57:42.535882 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jan 23 17:57:42.566172 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:57:42.569060 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:57:42.628800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 17:57:42.632727 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:57:42.734904 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 17:57:42.748315 kernel: scsi host0: Virtio SCSI HBA Jan 23 17:57:42.749473 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 17:57:42.749507 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 17:57:42.765299 kernel: ACPI: bus type USB registered Jan 23 17:57:42.767304 kernel: usbcore: registered new interface driver usbfs Jan 23 17:57:42.767357 kernel: usbcore: registered new interface driver hub Jan 23 17:57:42.772344 kernel: usbcore: registered new device driver usb Jan 23 17:57:42.772952 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:57:42.773081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:42.776060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:57:42.779658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:57:42.791485 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 23 17:57:42.791681 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 23 17:57:42.791796 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 23 17:57:42.791906 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 23 17:57:42.792014 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 17:57:42.796744 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 23 17:57:42.796987 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 23 17:57:42.797074 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 17:57:42.801301 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 23 17:57:42.805528 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 23 17:57:42.805578 kernel: GPT:17805311 != 80003071 Jan 23 17:57:42.805588 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:57:42.805605 kernel: GPT:17805311 != 80003071 Jan 23 17:57:42.806461 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:57:42.806493 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:57:42.807322 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 23 17:57:42.817214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:42.821470 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:57:42.821674 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 17:57:42.822947 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 17:57:42.826362 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:57:42.826544 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 17:57:42.827315 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 17:57:42.828330 kernel: hub 1-0:1.0: USB hub found Jan 23 17:57:42.829307 kernel: hub 1-0:1.0: 4 ports detected Jan 23 17:57:42.832293 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 17:57:42.834291 kernel: hub 2-0:1.0: USB hub found Jan 23 17:57:42.835318 kernel: hub 2-0:1.0: 4 ports detected Jan 23 17:57:42.891868 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 17:57:42.900517 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 17:57:42.907693 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 17:57:42.908481 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 17:57:42.917225 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Jan 23 17:57:42.919350 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:57:42.935092 disk-uuid[600]: Primary Header is updated. Jan 23 17:57:42.935092 disk-uuid[600]: Secondary Entries is updated. Jan 23 17:57:42.935092 disk-uuid[600]: Secondary Header is updated. Jan 23 17:57:42.950344 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:57:42.968343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:57:43.072597 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 17:57:43.126258 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:57:43.141801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:57:43.142595 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:57:43.145012 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:57:43.148494 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:57:43.193656 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 17:57:43.203817 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 17:57:43.203884 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 17:57:43.204583 kernel: usbcore: registered new interface driver usbhid Jan 23 17:57:43.205286 kernel: usbhid: USB HID core driver Jan 23 17:57:43.310314 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 17:57:43.439316 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 17:57:43.491465 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 17:57:43.965307 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:57:43.966893 disk-uuid[601]: The operation has completed successfully. Jan 23 17:57:44.031693 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:57:44.033323 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:57:44.061822 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:57:44.084967 sh[631]: Success Jan 23 17:57:44.104775 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:57:44.104954 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:57:44.104990 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:57:44.119306 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:57:44.169695 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:57:44.174217 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 23 17:57:44.191889 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 17:57:44.205568 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (643) Jan 23 17:57:44.205716 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64 Jan 23 17:57:44.205809 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:57:44.215376 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 17:57:44.215440 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:57:44.215452 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:57:44.218322 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 17:57:44.219463 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:57:44.220586 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:57:44.222406 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:57:44.225483 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 17:57:44.255293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (678) Jan 23 17:57:44.259879 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:57:44.259956 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:57:44.265443 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:57:44.265496 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:57:44.265507 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:57:44.271362 kernel: BTRFS info (device sda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:57:44.272683 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:57:44.276934 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 17:57:44.384076 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:57:44.386676 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:57:44.419709 ignition[725]: Ignition 2.22.0 Jan 23 17:57:44.420475 ignition[725]: Stage: fetch-offline Jan 23 17:57:44.420992 ignition[725]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:44.421592 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:44.421715 ignition[725]: parsed url from cmdline: "" Jan 23 17:57:44.421718 ignition[725]: no config URL provided Jan 23 17:57:44.421725 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:57:44.421733 ignition[725]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:57:44.421739 ignition[725]: failed to fetch config: resource requires networking Jan 23 17:57:44.421932 ignition[725]: Ignition finished successfully Jan 23 17:57:44.427338 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 23 17:57:44.432754 systemd-networkd[819]: lo: Link UP Jan 23 17:57:44.432770 systemd-networkd[819]: lo: Gained carrier Jan 23 17:57:44.434911 systemd-networkd[819]: Enumeration completed Jan 23 17:57:44.435573 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:57:44.435577 systemd-networkd[819]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:57:44.436704 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:57:44.436708 systemd-networkd[819]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:57:44.437659 systemd-networkd[819]: eth0: Link UP Jan 23 17:57:44.437742 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:57:44.437856 systemd-networkd[819]: eth1: Link UP Jan 23 17:57:44.438015 systemd-networkd[819]: eth0: Gained carrier Jan 23 17:57:44.438027 systemd-networkd[819]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:57:44.438721 systemd[1]: Reached target network.target - Network. Jan 23 17:57:44.442353 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 17:57:44.446971 systemd-networkd[819]: eth1: Gained carrier Jan 23 17:57:44.447006 systemd-networkd[819]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 17:57:44.478435 ignition[823]: Ignition 2.22.0 Jan 23 17:57:44.479112 ignition[823]: Stage: fetch Jan 23 17:57:44.479294 ignition[823]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:44.479306 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:44.479385 ignition[823]: parsed url from cmdline: "" Jan 23 17:57:44.479389 ignition[823]: no config URL provided Jan 23 17:57:44.479394 ignition[823]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:57:44.479401 ignition[823]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:57:44.479435 ignition[823]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 17:57:44.480100 ignition[823]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 17:57:44.489380 systemd-networkd[819]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 17:57:44.497380 systemd-networkd[819]: eth0: DHCPv4 address 49.12.73.152/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 17:57:44.681223 ignition[823]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 17:57:44.690652 ignition[823]: GET result: OK Jan 23 17:57:44.690798 ignition[823]: parsing config with SHA512: 6152a28e165d73f42245f4f8b839f1c76a57efb26d6d3424c247578f00545e0490cf0063f1478925fb6ecb76e3d5213cb7a3e0f8ea6e445d7bdec52bac5becd6 Jan 23 17:57:44.695521 unknown[823]: fetched base config from "system" Jan 23 17:57:44.695537 unknown[823]: fetched base config from "system" Jan 23 17:57:44.695543 unknown[823]: fetched user config from "hetzner" Jan 23 17:57:44.697561 ignition[823]: fetch: fetch complete Jan 23 17:57:44.697569 ignition[823]: fetch: fetch passed Jan 23 17:57:44.697651 ignition[823]: Ignition finished successfully Jan 23 17:57:44.702349 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:57:44.706330 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 17:57:44.750889 ignition[830]: Ignition 2.22.0 Jan 23 17:57:44.750908 ignition[830]: Stage: kargs Jan 23 17:57:44.751075 ignition[830]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:44.751084 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:44.752006 ignition[830]: kargs: kargs passed Jan 23 17:57:44.752072 ignition[830]: Ignition finished successfully Jan 23 17:57:44.755438 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:57:44.758930 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 17:57:44.798071 ignition[837]: Ignition 2.22.0 Jan 23 17:57:44.798088 ignition[837]: Stage: disks Jan 23 17:57:44.798261 ignition[837]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:44.799461 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:44.800782 ignition[837]: disks: disks passed Jan 23 17:57:44.800893 ignition[837]: Ignition finished successfully Jan 23 17:57:44.803138 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:57:44.804440 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:57:44.805625 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:57:44.806619 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:57:44.807257 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:57:44.808623 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:57:44.810717 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:57:44.842149 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 17:57:44.849395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:57:44.851938 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 23 17:57:44.937312 kernel: EXT4-fs (sda9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none. Jan 23 17:57:44.939514 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:57:44.942377 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:57:44.945669 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:57:44.948123 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:57:44.951981 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 17:57:44.955170 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:57:44.956728 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:57:44.964408 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:57:44.968438 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 17:57:44.977295 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853) Jan 23 17:57:44.980735 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:57:44.980806 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:57:44.994690 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:57:44.994758 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:57:44.994769 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:57:45.001690 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 17:57:45.020348 coreos-metadata[855]: Jan 23 17:57:45.020 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 17:57:45.023539 coreos-metadata[855]: Jan 23 17:57:45.023 INFO Fetch successful Jan 23 17:57:45.023539 coreos-metadata[855]: Jan 23 17:57:45.023 INFO wrote hostname ci-4459-2-3-9-0be39219fc to /sysroot/etc/hostname Jan 23 17:57:45.025081 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:57:45.032066 initrd-setup-root[881]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 17:57:45.037071 initrd-setup-root[888]: cut: /sysroot/etc/group: No such file or directory Jan 23 17:57:45.042018 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 17:57:45.047152 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 17:57:45.146012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:57:45.149013 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:57:45.151210 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 17:57:45.170374 kernel: BTRFS info (device sda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:57:45.187549 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:57:45.200891 ignition[970]: INFO : Ignition 2.22.0 Jan 23 17:57:45.200891 ignition[970]: INFO : Stage: mount Jan 23 17:57:45.202323 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:45.202323 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:45.202323 ignition[970]: INFO : mount: mount passed Jan 23 17:57:45.202323 ignition[970]: INFO : Ignition finished successfully Jan 23 17:57:45.204923 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:57:45.206630 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Jan 23 17:57:45.209443 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:57:45.235046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:57:45.259316 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (982) Jan 23 17:57:45.261333 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:57:45.261470 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:57:45.266085 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:57:45.266152 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:57:45.266175 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:57:45.270409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:57:45.310526 ignition[998]: INFO : Ignition 2.22.0 Jan 23 17:57:45.310526 ignition[998]: INFO : Stage: files Jan 23 17:57:45.311984 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:45.311984 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:45.311984 ignition[998]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:57:45.315462 ignition[998]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:57:45.315462 ignition[998]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:57:45.315462 ignition[998]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:57:45.319717 ignition[998]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:57:45.319717 ignition[998]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:57:45.318008 unknown[998]: wrote ssh authorized keys file for user: core Jan 23 17:57:45.323220 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:57:45.323220 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 17:57:45.409203 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:57:45.495596 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:57:45.495596 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:57:45.500032 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:57:45.510727 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:57:45.510727 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:57:45.510727 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:57:45.510727 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:57:45.510727 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:57:45.510727 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 23 17:57:45.587483 systemd-networkd[819]: eth1: Gained IPv6LL Jan 23 17:57:45.779449 systemd-networkd[819]: eth0: Gained IPv6LL Jan 23 17:57:45.836462 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:57:46.476887 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 17:57:46.476887 ignition[998]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:57:46.481709 ignition[998]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:57:46.484459 ignition[998]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:57:46.484459 ignition[998]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:57:46.488135 ignition[998]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 17:57:46.488135 ignition[998]: INFO : 
files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 17:57:46.488135 ignition[998]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 17:57:46.488135 ignition[998]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 17:57:46.488135 ignition[998]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:57:46.488135 ignition[998]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:57:46.488135 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:57:46.488135 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:57:46.488135 ignition[998]: INFO : files: files passed Jan 23 17:57:46.488135 ignition[998]: INFO : Ignition finished successfully Jan 23 17:57:46.489232 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:57:46.492512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:57:46.497475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:57:46.513813 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:57:46.514007 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 23 17:57:46.525238 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:57:46.525238 initrd-setup-root-after-ignition[1028]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:57:46.529521 initrd-setup-root-after-ignition[1032]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:57:46.532148 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:57:46.533606 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:57:46.535936 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:57:46.591615 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:57:46.591786 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:57:46.594020 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:57:46.595440 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:57:46.597185 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:57:46.598185 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:57:46.637952 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:57:46.643122 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:57:46.668713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:57:46.671184 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:57:46.672808 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:57:46.674117 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 23 17:57:46.674363 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:57:46.676326 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:57:46.678125 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:57:46.679409 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:57:46.680472 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:57:46.681658 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:57:46.683004 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:57:46.684160 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:57:46.685325 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:57:46.686611 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:57:46.687816 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:57:46.688944 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:57:46.689879 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:57:46.690054 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:57:46.691427 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:57:46.692687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:57:46.693797 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:57:46.693930 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:57:46.695176 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:57:46.695363 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:57:46.697127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 23 17:57:46.697316 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:57:46.698541 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:57:46.698700 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:57:46.699622 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 17:57:46.699763 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:57:46.703503 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:57:46.704776 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:57:46.706411 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:57:46.710469 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:57:46.711049 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:57:46.711234 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:57:46.712746 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:57:46.712905 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:57:46.720029 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:57:46.720146 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:57:46.741175 ignition[1052]: INFO : Ignition 2.22.0 Jan 23 17:57:46.743721 ignition[1052]: INFO : Stage: umount Jan 23 17:57:46.743721 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:57:46.743721 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:57:46.743721 ignition[1052]: INFO : umount: umount passed Jan 23 17:57:46.743721 ignition[1052]: INFO : Ignition finished successfully Jan 23 17:57:46.745207 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 23 17:57:46.748838 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:57:46.749351 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:57:46.751114 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:57:46.751196 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:57:46.752472 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:57:46.752523 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:57:46.753571 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:57:46.753616 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:57:46.754564 systemd[1]: Stopped target network.target - Network. Jan 23 17:57:46.755470 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:57:46.755531 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:57:46.756669 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:57:46.757621 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:57:46.761365 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:57:46.764053 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:57:46.765027 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:57:46.766299 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:57:46.766369 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:57:46.767771 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:57:46.767809 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:57:46.769726 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:57:46.769791 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 23 17:57:46.771740 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:57:46.771784 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:57:46.773153 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:57:46.774293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:57:46.775687 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:57:46.776359 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:57:46.777679 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:57:46.777759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:57:46.782437 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:57:46.782565 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:57:46.786223 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 17:57:46.786491 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:57:46.786536 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:57:46.788812 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:57:46.791480 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:57:46.791588 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:57:46.794439 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 17:57:46.795091 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:57:46.795991 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:57:46.796038 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 23 17:57:46.798106 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:57:46.800871 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:57:46.800949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:57:46.803454 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:57:46.803532 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:57:46.807671 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:57:46.807731 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:57:46.808452 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:57:46.811723 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:57:46.834378 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:57:46.834580 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:57:46.836164 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:57:46.836223 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:57:46.838305 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:57:46.838346 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:57:46.839812 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:57:46.839891 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:57:46.841735 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:57:46.841794 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:57:46.843802 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 23 17:57:46.843916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:57:46.846702 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:57:46.848381 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:57:46.848452 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:57:46.851450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:57:46.851500 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:57:46.854650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:57:46.854698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:46.857213 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:57:46.858706 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:57:46.865485 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:57:46.865601 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:57:46.867201 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:57:46.869170 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:57:46.908020 systemd[1]: Switching root. Jan 23 17:57:46.946386 systemd-journald[244]: Journal stopped Jan 23 17:57:47.954344 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Jan 23 17:57:47.954439 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 17:57:47.954456 kernel: SELinux: policy capability open_perms=1
Jan 23 17:57:47.954466 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 17:57:47.954484 kernel: SELinux: policy capability always_check_network=0
Jan 23 17:57:47.954498 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 17:57:47.954508 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 17:57:47.954521 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 17:57:47.954537 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 17:57:47.954547 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 17:57:47.954557 kernel: audit: type=1403 audit(1769191067.131:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 17:57:47.954568 systemd[1]: Successfully loaded SELinux policy in 65.815ms.
Jan 23 17:57:47.954585 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.134ms.
Jan 23 17:57:47.954597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:57:47.954609 systemd[1]: Detected virtualization kvm.
Jan 23 17:57:47.954619 systemd[1]: Detected architecture arm64.
Jan 23 17:57:47.954629 systemd[1]: Detected first boot.
Jan 23 17:57:47.954640 systemd[1]: Hostname set to .
Jan 23 17:57:47.954650 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:57:47.954664 zram_generator::config[1096]: No configuration found.
Jan 23 17:57:47.954680 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 17:57:47.954690 systemd[1]: Populated /etc with preset unit settings.
Jan 23 17:57:47.954702 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 17:57:47.954713 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 17:57:47.954724 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 17:57:47.954737 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:57:47.954751 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 17:57:47.954763 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 17:57:47.954773 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 17:57:47.954784 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 17:57:47.954794 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 17:57:47.954805 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 17:57:47.954825 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 17:57:47.954842 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 17:57:47.954854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:57:47.954865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:57:47.954876 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 17:57:47.954886 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 17:57:47.954897 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:57:47.954908 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:57:47.954919 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 17:57:47.954931 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:57:47.954941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:57:47.954952 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 17:57:47.954962 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 17:57:47.954977 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:57:47.954988 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 17:57:47.954998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:57:47.955009 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:57:47.955022 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:57:47.955033 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:57:47.955044 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 17:57:47.955055 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 17:57:47.955066 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 17:57:47.955076 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:57:47.955087 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:57:47.955098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:57:47.955109 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 17:57:47.955120 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 17:57:47.955133 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 17:57:47.955144 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 17:57:47.955155 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 17:57:47.955166 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 17:57:47.955176 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 17:57:47.955188 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 17:57:47.955199 systemd[1]: Reached target machines.target - Containers.
Jan 23 17:57:47.955209 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 17:57:47.955222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:57:47.955233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:57:47.955244 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 17:57:47.955254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:57:47.956342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:57:47.956387 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:57:47.956400 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 17:57:47.956411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:57:47.956422 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 17:57:47.956440 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 17:57:47.956451 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 17:57:47.956463 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 17:57:47.956474 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 17:57:47.956485 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:57:47.956496 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:57:47.956507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:57:47.956521 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:57:47.956550 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 17:57:47.956562 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 17:57:47.956573 kernel: fuse: init (API version 7.41)
Jan 23 17:57:47.956584 kernel: loop: module loaded
Jan 23 17:57:47.956596 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:57:47.956609 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 17:57:47.956633 systemd[1]: Stopped verity-setup.service.
Jan 23 17:57:47.956646 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 17:57:47.956657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 17:57:47.956670 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 17:57:47.956682 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 17:57:47.956693 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 17:57:47.956704 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 17:57:47.956715 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:57:47.956725 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:57:47.956736 kernel: ACPI: bus type drm_connector registered
Jan 23 17:57:47.956747 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 17:57:47.956758 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:57:47.956771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:57:47.956782 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:57:47.956793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:57:47.956805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:57:47.956829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:57:47.956842 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 17:57:47.956852 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 17:57:47.956863 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:57:47.956875 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:57:47.956889 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 17:57:47.956903 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:57:47.956954 systemd-journald[1158]: Collecting audit messages is disabled.
Jan 23 17:57:47.957040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:57:47.957057 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 17:57:47.957068 systemd-journald[1158]: Journal started
Jan 23 17:57:47.957093 systemd-journald[1158]: Runtime Journal (/run/log/journal/e2722aefcadd4b7699d6911262a6db0c) is 8M, max 76.5M, 68.5M free.
Jan 23 17:57:47.650284 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 17:57:47.677527 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 17:57:47.678422 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 17:57:47.960659 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:57:47.967157 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 17:57:47.978207 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:57:47.980215 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 17:57:47.984477 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 17:57:47.986425 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 17:57:47.986463 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:57:47.988124 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 17:57:48.002455 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 17:57:48.003253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:57:48.006526 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 17:57:48.009974 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 17:57:48.010878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:57:48.013996 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 17:57:48.014970 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:57:48.024660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:57:48.029256 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 17:57:48.033339 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 17:57:48.037699 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 17:57:48.039593 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 17:57:48.057315 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 17:57:48.058618 systemd-journald[1158]: Time spent on flushing to /var/log/journal/e2722aefcadd4b7699d6911262a6db0c is 46.287ms for 1171 entries.
Jan 23 17:57:48.058618 systemd-journald[1158]: System Journal (/var/log/journal/e2722aefcadd4b7699d6911262a6db0c) is 8M, max 584.8M, 576.8M free.
Jan 23 17:57:48.119415 systemd-journald[1158]: Received client request to flush runtime journal.
Jan 23 17:57:48.119538 kernel: loop0: detected capacity change from 0 to 100632
Jan 23 17:57:48.059791 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 17:57:48.070338 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 17:57:48.109576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:57:48.124337 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 17:57:48.140284 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 17:57:48.142664 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 17:57:48.149924 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:57:48.161371 kernel: loop1: detected capacity change from 0 to 8
Jan 23 17:57:48.177173 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 17:57:48.181552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:57:48.186300 kernel: loop2: detected capacity change from 0 to 119840
Jan 23 17:57:48.218299 kernel: loop3: detected capacity change from 0 to 200800
Jan 23 17:57:48.232241 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 23 17:57:48.232296 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 23 17:57:48.243705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:57:48.262292 kernel: loop4: detected capacity change from 0 to 100632
Jan 23 17:57:48.285018 kernel: loop5: detected capacity change from 0 to 8
Jan 23 17:57:48.285127 kernel: loop6: detected capacity change from 0 to 119840
Jan 23 17:57:48.299444 kernel: loop7: detected capacity change from 0 to 200800
Jan 23 17:57:48.319469 (sd-merge)[1241]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 23 17:57:48.319984 (sd-merge)[1241]: Merged extensions into '/usr'.
Jan 23 17:57:48.325091 systemd[1]: Reload requested from client PID 1215 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 17:57:48.325120 systemd[1]: Reloading...
Jan 23 17:57:48.467762 zram_generator::config[1274]: No configuration found.
Jan 23 17:57:48.612094 ldconfig[1210]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 17:57:48.683942 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 17:57:48.684548 systemd[1]: Reloading finished in 357 ms.
Jan 23 17:57:48.709141 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 17:57:48.710985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 17:57:48.728969 systemd[1]: Starting ensure-sysext.service...
Jan 23 17:57:48.734444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:57:48.745102 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 17:57:48.753981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:57:48.760579 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)...
Jan 23 17:57:48.760593 systemd[1]: Reloading...
Jan 23 17:57:48.777387 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 17:57:48.777416 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 17:57:48.778381 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 17:57:48.778613 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 17:57:48.779920 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 17:57:48.780599 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Jan 23 17:57:48.780886 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Jan 23 17:57:48.785677 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:57:48.785687 systemd-tmpfiles[1306]: Skipping /boot
Jan 23 17:57:48.795309 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:57:48.795452 systemd-tmpfiles[1306]: Skipping /boot
Jan 23 17:57:48.827186 systemd-udevd[1308]: Using default interface naming scheme 'v255'.
Jan 23 17:57:48.863632 zram_generator::config[1346]: No configuration found.
Jan 23 17:57:49.102974 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 23 17:57:49.103185 systemd[1]: Reloading finished in 342 ms.
Jan 23 17:57:49.113297 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 17:57:49.113565 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:57:49.116333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:57:49.132508 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:57:49.136229 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 17:57:49.139574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 17:57:49.147537 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:57:49.152427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:57:49.160543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 17:57:49.168940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:57:49.170893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:57:49.177719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:57:49.185137 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:57:49.186727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:57:49.186889 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:57:49.191662 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 17:57:49.195989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:57:49.196152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:57:49.196240 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:57:49.199547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:57:49.201755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:57:49.202894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:57:49.203015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:57:49.209949 systemd[1]: Finished ensure-sysext.service.
Jan 23 17:57:49.216421 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 17:57:49.217683 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 17:57:49.225470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:57:49.225650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:57:49.226536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:57:49.230644 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 17:57:49.240988 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 17:57:49.275262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:57:49.275499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:57:49.298455 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:57:49.299865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:57:49.303679 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:57:49.303929 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:57:49.307200 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 17:57:49.311103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:57:49.323094 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 17:57:49.326494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 17:57:49.339676 augenrules[1461]: No rules
Jan 23 17:57:49.343557 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 17:57:49.343955 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 17:57:49.347854 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 17:57:49.389172 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 17:57:49.392738 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 17:57:49.468708 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 17:57:49.470479 systemd-resolved[1420]: Positive Trust Anchors:
Jan 23 17:57:49.470505 systemd-resolved[1420]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:57:49.470537 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:57:49.479192 systemd-resolved[1420]: Using system hostname 'ci-4459-2-3-9-0be39219fc'.
Jan 23 17:57:49.485189 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 17:57:49.487058 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:57:49.490149 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 23 17:57:49.490209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:57:49.493426 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 17:57:49.494570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:57:49.497484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:57:49.500490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:57:49.503935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:57:49.505242 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:57:49.505301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:57:49.505324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 17:57:49.517764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:57:49.519624 systemd-networkd[1419]: lo: Link UP
Jan 23 17:57:49.519633 systemd-networkd[1419]: lo: Gained carrier
Jan 23 17:57:49.519737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:57:49.520889 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:57:49.526458 systemd-networkd[1419]: Enumeration completed
Jan 23 17:57:49.526939 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:57:49.526948 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:57:49.527358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:57:49.528179 systemd[1]: Reached target network.target - Network.
Jan 23 17:57:49.531975 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:57:49.531984 systemd-networkd[1419]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:57:49.533453 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 17:57:49.536427 systemd-networkd[1419]: eth0: Link UP
Jan 23 17:57:49.536587 systemd-networkd[1419]: eth0: Gained carrier
Jan 23 17:57:49.536617 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:57:49.537312 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 17:57:49.542179 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:57:49.542572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:57:49.545597 systemd-networkd[1419]: eth1: Link UP
Jan 23 17:57:49.547262 systemd-networkd[1419]: eth1: Gained carrier
Jan 23 17:57:49.547314 systemd-networkd[1419]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:57:49.552661 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 23 17:57:49.552748 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 17:57:49.552766 kernel: [drm] features: -context_init
Jan 23 17:57:49.553687 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:57:49.554426 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:57:49.556565 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:57:49.558740 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 17:57:49.561539 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 17:57:49.563937 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 17:57:49.564774 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 17:57:49.568452 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 17:57:49.570370 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 17:57:49.570404 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:57:49.572359 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:57:49.575159 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 17:57:49.580865 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 17:57:49.585783 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 17:57:49.588480 kernel: [drm] number of scanouts: 1
Jan 23 17:57:49.588547 kernel: [drm] number of cap sets: 0
Jan 23 17:57:49.590574 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 17:57:49.592350 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 17:57:49.593022 systemd-networkd[1419]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 23 17:57:49.595149 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Jan 23 17:57:49.597429 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 17:57:49.599623 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 17:57:49.600710 systemd-networkd[1419]: eth0: DHCPv4 address 49.12.73.152/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 23 17:57:49.601239 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Jan 23 17:57:49.601438 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection.
Jan 23 17:57:49.601789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:57:49.604326 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:57:49.605701 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:57:49.609121 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:57:49.611438 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:57:49.612110 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:57:49.612145 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:57:49.612440 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 17:57:49.614957 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:57:49.620179 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:57:49.623603 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:57:49.625994 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:57:49.631550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:57:49.640202 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:57:49.640994 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:57:49.643435 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:57:49.651945 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:57:49.655205 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jan 23 17:57:49.660545 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:57:49.666095 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:57:49.670708 jq[1510]: false Jan 23 17:57:49.679513 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:57:49.681188 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:57:49.683297 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 17:57:49.690605 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:57:49.693613 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:57:49.708431 extend-filesystems[1511]: Found /dev/sda6 Jan 23 17:57:49.766622 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 17:57:49.766852 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 23 17:57:49.766870 coreos-metadata[1507]: Jan 23 17:57:49.729 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 17:57:49.766870 coreos-metadata[1507]: Jan 23 17:57:49.731 INFO Fetch successful Jan 23 17:57:49.766870 coreos-metadata[1507]: Jan 23 17:57:49.731 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 17:57:49.766870 coreos-metadata[1507]: Jan 23 17:57:49.732 INFO Fetch successful Jan 23 17:57:49.767109 extend-filesystems[1511]: Found /dev/sda9 Jan 23 17:57:49.767109 extend-filesystems[1511]: Checking size of /dev/sda9 Jan 23 17:57:49.767109 extend-filesystems[1511]: Resized partition /dev/sda9 Jan 23 17:57:49.792222 extend-filesystems[1540]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:57:49.770142 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 23 17:57:49.790319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:57:49.791956 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:57:49.792344 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:57:49.792641 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:57:49.792801 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:57:49.799794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:57:49.800072 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:57:49.817545 jq[1541]: true Jan 23 17:57:49.849819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:57:49.865353 update_engine[1524]: I20260123 17:57:49.861090 1524 main.cc:92] Flatcar Update Engine starting Jan 23 17:57:49.865983 tar[1545]: linux-arm64/LICENSE Jan 23 17:57:49.869405 tar[1545]: linux-arm64/helm Jan 23 17:57:49.868510 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:57:49.890219 jq[1553]: true Jan 23 17:57:49.895162 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:57:49.894605 dbus-daemon[1508]: [system] SELinux support is enabled Jan 23 17:57:49.903227 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:57:49.903290 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 23 17:57:49.906610 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:57:49.906646 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:57:49.932830 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:57:49.933377 update_engine[1524]: I20260123 17:57:49.932977 1524 update_check_scheduler.cc:74] Next update check in 10m31s Jan 23 17:57:49.954177 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 23 17:57:49.964488 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:57:49.965736 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:57:49.966684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:49.968023 extend-filesystems[1540]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 17:57:49.968023 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 23 17:57:49.968023 extend-filesystems[1540]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 23 17:57:49.981399 extend-filesystems[1511]: Resized filesystem in /dev/sda9 Jan 23 17:57:49.968365 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:57:49.968560 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:57:49.978249 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:57:49.980960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:57:49.987646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 17:57:50.031674 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:57:50.032241 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:57:50.038574 systemd[1]: Starting sshkeys.service... Jan 23 17:57:50.080461 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:57:50.085755 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:57:50.165949 containerd[1554]: time="2026-01-23T17:57:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:57:50.175365 containerd[1554]: time="2026-01-23T17:57:50.175144800Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:57:50.194616 coreos-metadata[1597]: Jan 23 17:57:50.194 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 17:57:50.202277 coreos-metadata[1597]: Jan 23 17:57:50.201 INFO Fetch successful Jan 23 17:57:50.205632 unknown[1597]: wrote ssh authorized keys file for user: core Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.230813440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12µs" Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.230873200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.230898120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231062720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations 
type=io.containerd.warning.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231082880Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231112560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231171600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231187040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231523680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231543480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231561040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232276 containerd[1554]: time="2026-01-23T17:57:50.231572840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232564 containerd[1554]: time="2026-01-23T17:57:50.231657120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232564 containerd[1554]: 
time="2026-01-23T17:57:50.231895760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232564 containerd[1554]: time="2026-01-23T17:57:50.231933800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:57:50.232564 containerd[1554]: time="2026-01-23T17:57:50.231944920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:57:50.232564 containerd[1554]: time="2026-01-23T17:57:50.231980520Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:57:50.240083 containerd[1554]: time="2026-01-23T17:57:50.240033480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:57:50.241094 containerd[1554]: time="2026-01-23T17:57:50.241059680Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256355040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256431760Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256447160Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256460360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256475520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service 
type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256487400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256502480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256515240Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256527800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256558800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256569680Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256582880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256719960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:57:50.257299 containerd[1554]: time="2026-01-23T17:57:50.256742880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256759600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256772320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256784080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256796080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256851640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256871760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256888520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256900240Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.256911440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.257139760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.257158800Z" level=info msg="Start snapshots syncer" Jan 23 17:57:50.257607 containerd[1554]: time="2026-01-23T17:57:50.257184240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:57:50.262655 containerd[1554]: time="2026-01-23T17:57:50.261531560Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:57:50.263256 containerd[1554]: time="2026-01-23T17:57:50.262958720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:57:50.263256 containerd[1554]: time="2026-01-23T17:57:50.263061600Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268418160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268515120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268530240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268554480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268569680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268581560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268593920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268634640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268647880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268667120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268732480Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268750080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:57:50.270593 containerd[1554]: time="2026-01-23T17:57:50.268761600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.268771320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.268872240Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.268884800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.268897680Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.268989040Z" level=info msg="runtime interface created" Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.268995480Z" level=info msg="created NRI interface" Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.269005000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.269027440Z" level=info msg="Connect containerd service" Jan 23 17:57:50.270946 containerd[1554]: time="2026-01-23T17:57:50.269412320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:57:50.277400 
update-ssh-keys[1604]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:57:50.279433 containerd[1554]: time="2026-01-23T17:57:50.278383280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:57:50.279521 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:57:50.285651 systemd[1]: Finished sshkeys.service. Jan 23 17:57:50.332509 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:57:50.351226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:57:50.376219 systemd-logind[1520]: New seat seat0. Jan 23 17:57:50.377936 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:57:50.378093 systemd-logind[1520]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 17:57:50.378401 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 23 17:57:50.452139 containerd[1554]: time="2026-01-23T17:57:50.451982240Z" level=info msg="Start subscribing containerd event" Jan 23 17:57:50.452341 containerd[1554]: time="2026-01-23T17:57:50.452320920Z" level=info msg="Start recovering state" Jan 23 17:57:50.452653 containerd[1554]: time="2026-01-23T17:57:50.452625560Z" level=info msg="Start event monitor" Jan 23 17:57:50.452751 containerd[1554]: time="2026-01-23T17:57:50.452739320Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:57:50.452903 containerd[1554]: time="2026-01-23T17:57:50.452887680Z" level=info msg="Start streaming server" Jan 23 17:57:50.452986 containerd[1554]: time="2026-01-23T17:57:50.452972880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:57:50.453229 containerd[1554]: time="2026-01-23T17:57:50.453213320Z" level=info msg="runtime interface starting up..." Jan 23 17:57:50.453333 containerd[1554]: time="2026-01-23T17:57:50.453322080Z" level=info msg="starting plugins..." Jan 23 17:57:50.453400 containerd[1554]: time="2026-01-23T17:57:50.453192520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:57:50.453562 containerd[1554]: time="2026-01-23T17:57:50.453485880Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:57:50.453788 containerd[1554]: time="2026-01-23T17:57:50.453734040Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:57:50.454482 containerd[1554]: time="2026-01-23T17:57:50.454454840Z" level=info msg="containerd successfully booted in 0.289534s" Jan 23 17:57:50.454568 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:57:50.645209 tar[1545]: linux-arm64/README.md Jan 23 17:57:50.665320 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 17:57:51.028419 systemd-networkd[1419]: eth1: Gained IPv6LL Jan 23 17:57:51.029106 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 23 17:57:51.034546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:57:51.036114 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:57:51.041520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:57:51.045397 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:57:51.092422 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:57:51.411381 systemd-networkd[1419]: eth0: Gained IPv6LL Jan 23 17:57:51.411950 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. Jan 23 17:57:51.562570 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:57:51.591497 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:57:51.597589 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:57:51.615863 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:57:51.616778 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:57:51.623543 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:57:51.643157 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:57:51.647850 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:57:51.652860 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 17:57:51.653867 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:57:51.884261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:57:51.886522 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 23 17:57:51.888496 systemd[1]: Startup finished in 2.335s (kernel) + 5.511s (initrd) + 4.821s (userspace) = 12.669s. Jan 23 17:57:51.894775 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:57:52.365290 kubelet[1667]: E0123 17:57:52.365135 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:57:52.369443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:57:52.369877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:57:52.370906 systemd[1]: kubelet.service: Consumed 829ms CPU time, 248.3M memory peak. Jan 23 17:58:02.620186 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:02.622954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:02.774577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:58:02.797854 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:58:02.848320 kubelet[1686]: E0123 17:58:02.848246 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:58:02.852035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:58:02.852218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:58:02.854438 systemd[1]: kubelet.service: Consumed 175ms CPU time, 104.8M memory peak. Jan 23 17:58:13.103195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:58:13.105978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:13.268646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:13.280856 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:58:13.327320 kubelet[1700]: E0123 17:58:13.327246 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:58:13.330112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:58:13.330305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:58:13.331014 systemd[1]: kubelet.service: Consumed 171ms CPU time, 107.5M memory peak. 
Jan 23 17:58:21.778226 systemd-timesyncd[1435]: Contacted time server 93.177.65.20:123 (2.flatcar.pool.ntp.org). Jan 23 17:58:21.778501 systemd-timesyncd[1435]: Initial clock synchronization to Fri 2026-01-23 17:58:22.142636 UTC. Jan 23 17:58:22.516188 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:58:22.517803 systemd[1]: Started sshd@0-49.12.73.152:22-68.220.241.50:34350.service - OpenSSH per-connection server daemon (68.220.241.50:34350). Jan 23 17:58:23.199605 sshd[1708]: Accepted publickey for core from 68.220.241.50 port 34350 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:58:23.202545 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:23.214409 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:58:23.215768 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:58:23.224067 systemd-logind[1520]: New session 1 of user core. Jan 23 17:58:23.244366 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:58:23.247760 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:58:23.261292 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:58:23.264970 systemd-logind[1520]: New session c1 of user core. Jan 23 17:58:23.400372 systemd[1713]: Queued start job for default target default.target. Jan 23 17:58:23.411436 systemd[1713]: Created slice app.slice - User Application Slice. Jan 23 17:58:23.411471 systemd[1713]: Reached target paths.target - Paths. Jan 23 17:58:23.411513 systemd[1713]: Reached target timers.target - Timers. Jan 23 17:58:23.412592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:58:23.415600 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 23 17:58:23.418758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:23.434377 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 23 17:58:23.434513 systemd[1713]: Reached target sockets.target - Sockets.
Jan 23 17:58:23.434687 systemd[1713]: Reached target basic.target - Basic System.
Jan 23 17:58:23.434777 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 23 17:58:23.435260 systemd[1713]: Reached target default.target - Main User Target.
Jan 23 17:58:23.435326 systemd[1713]: Startup finished in 161ms.
Jan 23 17:58:23.440114 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 23 17:58:23.581383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:23.593989 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:58:23.643392 kubelet[1730]: E0123 17:58:23.643339 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:58:23.646847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:58:23.646992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:58:23.649504 systemd[1]: kubelet.service: Consumed 176ms CPU time, 106.7M memory peak.
Jan 23 17:58:23.916687 systemd[1]: Started sshd@1-49.12.73.152:22-68.220.241.50:34356.service - OpenSSH per-connection server daemon (68.220.241.50:34356).
Jan 23 17:58:24.591518 sshd[1739]: Accepted publickey for core from 68.220.241.50 port 34356 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:58:24.593709 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:24.600287 systemd-logind[1520]: New session 2 of user core.
Jan 23 17:58:24.606673 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 17:58:25.050737 sshd[1742]: Connection closed by 68.220.241.50 port 34356
Jan 23 17:58:25.051770 sshd-session[1739]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:25.060057 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit.
Jan 23 17:58:25.060646 systemd[1]: sshd@1-49.12.73.152:22-68.220.241.50:34356.service: Deactivated successfully.
Jan 23 17:58:25.064436 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 17:58:25.067770 systemd-logind[1520]: Removed session 2.
Jan 23 17:58:25.167545 systemd[1]: Started sshd@2-49.12.73.152:22-68.220.241.50:34368.service - OpenSSH per-connection server daemon (68.220.241.50:34368).
Jan 23 17:58:25.817340 sshd[1748]: Accepted publickey for core from 68.220.241.50 port 34368 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:58:25.819486 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:25.825203 systemd-logind[1520]: New session 3 of user core.
Jan 23 17:58:25.836633 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 17:58:26.253780 sshd[1751]: Connection closed by 68.220.241.50 port 34368
Jan 23 17:58:26.253592 sshd-session[1748]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:26.263840 systemd[1]: sshd@2-49.12.73.152:22-68.220.241.50:34368.service: Deactivated successfully.
Jan 23 17:58:26.268029 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 17:58:26.272576 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit.
Jan 23 17:58:26.274180 systemd-logind[1520]: Removed session 3.
Jan 23 17:58:26.376571 systemd[1]: Started sshd@3-49.12.73.152:22-68.220.241.50:34384.service - OpenSSH per-connection server daemon (68.220.241.50:34384).
Jan 23 17:58:27.013339 sshd[1757]: Accepted publickey for core from 68.220.241.50 port 34384 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:58:27.015219 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:27.024371 systemd-logind[1520]: New session 4 of user core.
Jan 23 17:58:27.033627 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 17:58:27.447952 sshd[1760]: Connection closed by 68.220.241.50 port 34384
Jan 23 17:58:27.449224 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:27.455257 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit.
Jan 23 17:58:27.455509 systemd[1]: sshd@3-49.12.73.152:22-68.220.241.50:34384.service: Deactivated successfully.
Jan 23 17:58:27.458651 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 17:58:27.460520 systemd-logind[1520]: Removed session 4.
Jan 23 17:58:27.562490 systemd[1]: Started sshd@4-49.12.73.152:22-68.220.241.50:34394.service - OpenSSH per-connection server daemon (68.220.241.50:34394).
Jan 23 17:58:28.193912 sshd[1766]: Accepted publickey for core from 68.220.241.50 port 34394 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:58:28.196141 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:28.201944 systemd-logind[1520]: New session 5 of user core.
Jan 23 17:58:28.210627 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 17:58:28.542742 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 17:58:28.543033 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:58:28.561715 sudo[1770]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:28.659633 sshd[1769]: Connection closed by 68.220.241.50 port 34394
Jan 23 17:58:28.660456 sshd-session[1766]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:28.665644 systemd[1]: sshd@4-49.12.73.152:22-68.220.241.50:34394.service: Deactivated successfully.
Jan 23 17:58:28.668379 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 17:58:28.669818 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit.
Jan 23 17:58:28.672463 systemd-logind[1520]: Removed session 5.
Jan 23 17:58:28.778746 systemd[1]: Started sshd@5-49.12.73.152:22-68.220.241.50:34406.service - OpenSSH per-connection server daemon (68.220.241.50:34406).
Jan 23 17:58:29.442684 sshd[1776]: Accepted publickey for core from 68.220.241.50 port 34406 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:58:29.445129 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:29.450717 systemd-logind[1520]: New session 6 of user core.
Jan 23 17:58:29.458626 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 17:58:29.794940 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 17:58:29.795836 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:58:29.803038 sudo[1781]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:29.810546 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 17:58:29.810837 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:58:29.823748 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:58:29.882537 augenrules[1803]: No rules
Jan 23 17:58:29.884710 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 17:58:29.886394 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 17:58:29.888754 sudo[1780]: pam_unix(sudo:session): session closed for user root
Jan 23 17:58:29.990973 sshd[1779]: Connection closed by 68.220.241.50 port 34406
Jan 23 17:58:29.991990 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Jan 23 17:58:29.997777 systemd[1]: sshd@5-49.12.73.152:22-68.220.241.50:34406.service: Deactivated successfully.
Jan 23 17:58:30.001064 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 17:58:30.002828 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit.
Jan 23 17:58:30.005368 systemd-logind[1520]: Removed session 6.
Jan 23 17:58:30.103123 systemd[1]: Started sshd@6-49.12.73.152:22-68.220.241.50:34416.service - OpenSSH per-connection server daemon (68.220.241.50:34416).
Jan 23 17:58:30.727429 sshd[1812]: Accepted publickey for core from 68.220.241.50 port 34416 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:58:30.729211 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:58:30.737255 systemd-logind[1520]: New session 7 of user core.
Jan 23 17:58:30.747723 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 17:58:31.062165 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 17:58:31.062507 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 17:58:31.394030 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 17:58:31.408365 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 17:58:31.640398 dockerd[1834]: time="2026-01-23T17:58:31.640309891Z" level=info msg="Starting up"
Jan 23 17:58:31.641527 dockerd[1834]: time="2026-01-23T17:58:31.641447915Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 17:58:31.657605 dockerd[1834]: time="2026-01-23T17:58:31.657081947Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 17:58:31.678772 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport189038292-merged.mount: Deactivated successfully.
Jan 23 17:58:31.708986 dockerd[1834]: time="2026-01-23T17:58:31.708922728Z" level=info msg="Loading containers: start."
Jan 23 17:58:31.719322 kernel: Initializing XFRM netlink socket
Jan 23 17:58:31.983950 systemd-networkd[1419]: docker0: Link UP
Jan 23 17:58:31.988934 dockerd[1834]: time="2026-01-23T17:58:31.988851771Z" level=info msg="Loading containers: done."
Jan 23 17:58:32.012006 dockerd[1834]: time="2026-01-23T17:58:32.011900863Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 17:58:32.012484 dockerd[1834]: time="2026-01-23T17:58:32.012044323Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 17:58:32.012484 dockerd[1834]: time="2026-01-23T17:58:32.012158152Z" level=info msg="Initializing buildkit"
Jan 23 17:58:32.045833 dockerd[1834]: time="2026-01-23T17:58:32.045769166Z" level=info msg="Completed buildkit initialization"
Jan 23 17:58:32.057863 dockerd[1834]: time="2026-01-23T17:58:32.057185072Z" level=info msg="Daemon has completed initialization"
Jan 23 17:58:32.057863 dockerd[1834]: time="2026-01-23T17:58:32.057259311Z" level=info msg="API listen on /run/docker.sock"
Jan 23 17:58:32.058244 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 17:58:32.677386 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3617791660-merged.mount: Deactivated successfully.
Jan 23 17:58:33.130587 containerd[1554]: time="2026-01-23T17:58:33.129892474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 23 17:58:33.720339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 17:58:33.724370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:33.737858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446936202.mount: Deactivated successfully.
Jan 23 17:58:33.904256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:33.915530 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:58:33.975521 kubelet[2067]: E0123 17:58:33.975158 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:58:33.978939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:58:33.979224 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:58:33.981372 systemd[1]: kubelet.service: Consumed 172ms CPU time, 106.6M memory peak.
Jan 23 17:58:34.640309 containerd[1554]: time="2026-01-23T17:58:34.640056606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:34.641335 containerd[1554]: time="2026-01-23T17:58:34.641152334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571138"
Jan 23 17:58:34.642357 containerd[1554]: time="2026-01-23T17:58:34.642312367Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:34.645376 containerd[1554]: time="2026-01-23T17:58:34.645270316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:34.646683 containerd[1554]: time="2026-01-23T17:58:34.646383119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.515870984s"
Jan 23 17:58:34.646683 containerd[1554]: time="2026-01-23T17:58:34.646429299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Jan 23 17:58:34.647017 containerd[1554]: time="2026-01-23T17:58:34.646983299Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 17:58:35.223420 update_engine[1524]: I20260123 17:58:35.222875 1524 update_attempter.cc:509] Updating boot flags...
Jan 23 17:58:35.849948 containerd[1554]: time="2026-01-23T17:58:35.849865777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:35.852488 containerd[1554]: time="2026-01-23T17:58:35.852435898Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135497"
Jan 23 17:58:35.853742 containerd[1554]: time="2026-01-23T17:58:35.853687311Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:35.859311 containerd[1554]: time="2026-01-23T17:58:35.857937863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:35.860784 containerd[1554]: time="2026-01-23T17:58:35.860721123Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.213691946s"
Jan 23 17:58:35.860991 containerd[1554]: time="2026-01-23T17:58:35.860955350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Jan 23 17:58:35.861748 containerd[1554]: time="2026-01-23T17:58:35.861702625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 17:58:36.758907 containerd[1554]: time="2026-01-23T17:58:36.758846454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:36.759684 containerd[1554]: time="2026-01-23T17:58:36.759598302Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191736"
Jan 23 17:58:36.761309 containerd[1554]: time="2026-01-23T17:58:36.760575361Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:36.764344 containerd[1554]: time="2026-01-23T17:58:36.763442776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:36.765053 containerd[1554]: time="2026-01-23T17:58:36.765009672Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 903.235487ms"
Jan 23 17:58:36.765053 containerd[1554]: time="2026-01-23T17:58:36.765046248Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Jan 23 17:58:36.765724 containerd[1554]: time="2026-01-23T17:58:36.765688692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 17:58:37.764706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1889513472.mount: Deactivated successfully.
Jan 23 17:58:37.992301 containerd[1554]: time="2026-01-23T17:58:37.991948760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:37.992966 containerd[1554]: time="2026-01-23T17:58:37.992908345Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805279"
Jan 23 17:58:37.993954 containerd[1554]: time="2026-01-23T17:58:37.993754799Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:37.997300 containerd[1554]: time="2026-01-23T17:58:37.997227910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:37.998042 containerd[1554]: time="2026-01-23T17:58:37.998003692Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.232278998s"
Jan 23 17:58:37.998160 containerd[1554]: time="2026-01-23T17:58:37.998144351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Jan 23 17:58:37.998760 containerd[1554]: time="2026-01-23T17:58:37.998713792Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 17:58:38.557255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913817682.mount: Deactivated successfully.
Jan 23 17:58:39.395673 containerd[1554]: time="2026-01-23T17:58:39.395597920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:39.397694 containerd[1554]: time="2026-01-23T17:58:39.397139666Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498"
Jan 23 17:58:39.398761 containerd[1554]: time="2026-01-23T17:58:39.398705766Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:39.404207 containerd[1554]: time="2026-01-23T17:58:39.404164710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:39.405545 containerd[1554]: time="2026-01-23T17:58:39.405510940Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.406753696s"
Jan 23 17:58:39.405666 containerd[1554]: time="2026-01-23T17:58:39.405648424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Jan 23 17:58:39.406410 containerd[1554]: time="2026-01-23T17:58:39.406335645Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 17:58:39.941201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707965669.mount: Deactivated successfully.
Jan 23 17:58:39.947294 containerd[1554]: time="2026-01-23T17:58:39.947028493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:39.948339 containerd[1554]: time="2026-01-23T17:58:39.948307165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Jan 23 17:58:39.949048 containerd[1554]: time="2026-01-23T17:58:39.948983455Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:39.951564 containerd[1554]: time="2026-01-23T17:58:39.951503426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:39.953239 containerd[1554]: time="2026-01-23T17:58:39.952781616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 546.165818ms"
Jan 23 17:58:39.953239 containerd[1554]: time="2026-01-23T17:58:39.952832695Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Jan 23 17:58:39.953585 containerd[1554]: time="2026-01-23T17:58:39.953560547Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 17:58:40.551737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049474585.mount: Deactivated successfully.
Jan 23 17:58:42.761663 containerd[1554]: time="2026-01-23T17:58:42.761577009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:42.763549 containerd[1554]: time="2026-01-23T17:58:42.763484769Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98063043"
Jan 23 17:58:42.765094 containerd[1554]: time="2026-01-23T17:58:42.764997365Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:42.769293 containerd[1554]: time="2026-01-23T17:58:42.767937370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:42.769652 containerd[1554]: time="2026-01-23T17:58:42.769621306Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.815803327s"
Jan 23 17:58:42.769748 containerd[1554]: time="2026-01-23T17:58:42.769731532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Jan 23 17:58:44.009701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 23 17:58:44.014577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:44.186462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:44.195951 (kubelet)[2296]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:58:44.243282 kubelet[2296]: E0123 17:58:44.242463 2296 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:58:44.245204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:58:44.245379 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:58:44.245967 systemd[1]: kubelet.service: Consumed 177ms CPU time, 106.9M memory peak.
Jan 23 17:58:48.358582 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:48.358994 systemd[1]: kubelet.service: Consumed 177ms CPU time, 106.9M memory peak.
Jan 23 17:58:48.369243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:48.397467 systemd[1]: Reload requested from client PID 2310 ('systemctl') (unit session-7.scope)...
Jan 23 17:58:48.397670 systemd[1]: Reloading...
Jan 23 17:58:48.545296 zram_generator::config[2356]: No configuration found.
Jan 23 17:58:48.724820 systemd[1]: Reloading finished in 326 ms.
Jan 23 17:58:48.780994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 17:58:48.781091 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 17:58:48.781552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:48.781612 systemd[1]: kubelet.service: Consumed 105ms CPU time, 95M memory peak.
Jan 23 17:58:48.784011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:48.938825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:48.954745 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 17:58:49.009141 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 17:58:49.009141 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 17:58:49.009141 kubelet[2401]: I0123 17:58:49.008880 2401 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 17:58:50.155304 kubelet[2401]: I0123 17:58:50.155226 2401 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 17:58:50.155304 kubelet[2401]: I0123 17:58:50.155261 2401 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 17:58:50.155304 kubelet[2401]: I0123 17:58:50.155314 2401 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 17:58:50.155304 kubelet[2401]: I0123 17:58:50.155321 2401 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 17:58:50.155858 kubelet[2401]: I0123 17:58:50.155595 2401 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 17:58:50.162364 kubelet[2401]: E0123 17:58:50.162303 2401 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.12.73.152:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 17:58:50.163113 kubelet[2401]: I0123 17:58:50.162994 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 17:58:50.167202 kubelet[2401]: I0123 17:58:50.167182 2401 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 17:58:50.171543 kubelet[2401]: I0123 17:58:50.171507 2401 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 17:58:50.172565 kubelet[2401]: I0123 17:58:50.172519 2401 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 17:58:50.172752 kubelet[2401]: I0123 17:58:50.172561 2401 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-9-0be39219fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 17:58:50.172855 kubelet[2401]: I0123 17:58:50.172757 2401 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 17:58:50.172855 kubelet[2401]: I0123 17:58:50.172766 2401 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 17:58:50.172906 kubelet[2401]: I0123 17:58:50.172877 2401 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 17:58:50.175091 kubelet[2401]: I0123 17:58:50.175070 2401 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:58:50.178081 kubelet[2401]: I0123 17:58:50.177600 2401 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 17:58:50.178081 kubelet[2401]: I0123 17:58:50.177700 2401 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 17:58:50.178384 kubelet[2401]: E0123 17:58:50.178351 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.12.73.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-9-0be39219fc&limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 17:58:50.178908 kubelet[2401]: I0123 17:58:50.178882 2401 kubelet.go:387] "Adding apiserver pod source"
Jan 23 17:58:50.178956 kubelet[2401]: I0123 17:58:50.178920 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 17:58:50.180965 kubelet[2401]: E0123 17:58:50.180912 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.12.73.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 17:58:50.181695 kubelet[2401]: I0123 17:58:50.181574 2401 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 17:58:50.182998 kubelet[2401]: I0123 17:58:50.182972 2401 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 17:58:50.183074 kubelet[2401]: I0123 17:58:50.183010 2401 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 17:58:50.183074 kubelet[2401]: W0123 17:58:50.183059 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 17:58:50.187727 kubelet[2401]: I0123 17:58:50.187699 2401 server.go:1262] "Started kubelet"
Jan 23 17:58:50.188780 kubelet[2401]: I0123 17:58:50.188714 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 17:58:50.193862 kubelet[2401]: E0123 17:58:50.192420 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.73.152:6443/api/v1/namespaces/default/events\": dial tcp 49.12.73.152:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-9-0be39219fc.188d6df5969ac631 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-9-0be39219fc,UID:ci-4459-2-3-9-0be39219fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-9-0be39219fc,},FirstTimestamp:2026-01-23 17:58:50.187613745 +0000 UTC m=+1.225474686,LastTimestamp:2026-01-23 17:58:50.187613745 +0000 UTC m=+1.225474686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-9-0be39219fc,}"
Jan 23 17:58:50.195298 kubelet[2401]: I0123 17:58:50.194873 2401 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 17:58:50.196022 kubelet[2401]: I0123 17:58:50.195997 2401 server.go:310] "Adding debug handlers to kubelet server"
Jan 23
17:58:50.197983 kubelet[2401]: I0123 17:58:50.197115 2401 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 17:58:50.197983 kubelet[2401]: E0123 17:58:50.197465 2401 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-3-9-0be39219fc\" not found" Jan 23 17:58:50.199474 kubelet[2401]: I0123 17:58:50.199422 2401 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:58:50.199607 kubelet[2401]: I0123 17:58:50.199592 2401 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 17:58:50.199745 kubelet[2401]: I0123 17:58:50.199717 2401 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 17:58:50.199784 kubelet[2401]: I0123 17:58:50.199776 2401 reconciler.go:29] "Reconciler: start to sync state" Jan 23 17:58:50.199956 kubelet[2401]: I0123 17:58:50.199940 2401 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:58:50.200314 kubelet[2401]: I0123 17:58:50.200264 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:58:50.203300 kubelet[2401]: E0123 17:58:50.202868 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.73.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-9-0be39219fc?timeout=10s\": dial tcp 49.12.73.152:6443: connect: connection refused" interval="200ms" Jan 23 17:58:50.203300 kubelet[2401]: I0123 17:58:50.203165 2401 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:58:50.203420 kubelet[2401]: I0123 17:58:50.203263 2401 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 
23 17:58:50.204941 kubelet[2401]: E0123 17:58:50.204705 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.12.73.152:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:58:50.205486 kubelet[2401]: I0123 17:58:50.205466 2401 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:58:50.213004 kubelet[2401]: I0123 17:58:50.212948 2401 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 17:58:50.214096 kubelet[2401]: I0123 17:58:50.214065 2401 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 17:58:50.214096 kubelet[2401]: I0123 17:58:50.214088 2401 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 17:58:50.214194 kubelet[2401]: I0123 17:58:50.214133 2401 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 17:58:50.214194 kubelet[2401]: E0123 17:58:50.214172 2401 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:58:50.221777 kubelet[2401]: E0123 17:58:50.221718 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.12.73.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:58:50.222053 kubelet[2401]: E0123 17:58:50.222012 2401 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:58:50.241512 kubelet[2401]: I0123 17:58:50.241479 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:58:50.241820 kubelet[2401]: I0123 17:58:50.241659 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:58:50.242538 kubelet[2401]: I0123 17:58:50.242512 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:50.246566 kubelet[2401]: I0123 17:58:50.246488 2401 policy_none.go:49] "None policy: Start" Jan 23 17:58:50.246566 kubelet[2401]: I0123 17:58:50.246517 2401 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 17:58:50.246566 kubelet[2401]: I0123 17:58:50.246529 2401 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 17:58:50.248196 kubelet[2401]: I0123 17:58:50.248175 2401 policy_none.go:47] "Start" Jan 23 17:58:50.255120 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:58:50.271510 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:58:50.276704 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 17:58:50.284361 kubelet[2401]: E0123 17:58:50.283328 2401 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:58:50.284361 kubelet[2401]: I0123 17:58:50.283554 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:58:50.284361 kubelet[2401]: I0123 17:58:50.283568 2401 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:58:50.286391 kubelet[2401]: I0123 17:58:50.286368 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:58:50.288636 kubelet[2401]: E0123 17:58:50.288524 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:58:50.289010 kubelet[2401]: E0123 17:58:50.288960 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-9-0be39219fc\" not found" Jan 23 17:58:50.330332 systemd[1]: Created slice kubepods-burstable-pod30c27a3782de662aaa9429213a1c088e.slice - libcontainer container kubepods-burstable-pod30c27a3782de662aaa9429213a1c088e.slice. Jan 23 17:58:50.352370 kubelet[2401]: E0123 17:58:50.352239 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.356697 systemd[1]: Created slice kubepods-burstable-pode2e3a600cf90f383aaf7fdcecef38945.slice - libcontainer container kubepods-burstable-pode2e3a600cf90f383aaf7fdcecef38945.slice. 
Jan 23 17:58:50.368187 kubelet[2401]: E0123 17:58:50.368139 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.371985 systemd[1]: Created slice kubepods-burstable-podf8c63ee866c7a98fe98327cf938fdb0d.slice - libcontainer container kubepods-burstable-podf8c63ee866c7a98fe98327cf938fdb0d.slice. Jan 23 17:58:50.375483 kubelet[2401]: E0123 17:58:50.375099 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.386299 kubelet[2401]: I0123 17:58:50.386235 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.386955 kubelet[2401]: E0123 17:58:50.386917 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.73.152:6443/api/v1/nodes\": dial tcp 49.12.73.152:6443: connect: connection refused" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.401817 kubelet[2401]: I0123 17:58:50.401767 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402414 kubelet[2401]: I0123 17:58:50.402137 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2e3a600cf90f383aaf7fdcecef38945-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-9-0be39219fc\" (UID: \"e2e3a600cf90f383aaf7fdcecef38945\") " pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402414 
kubelet[2401]: I0123 17:58:50.402177 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8c63ee866c7a98fe98327cf938fdb0d-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" (UID: \"f8c63ee866c7a98fe98327cf938fdb0d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402414 kubelet[2401]: I0123 17:58:50.402206 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402414 kubelet[2401]: I0123 17:58:50.402230 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402414 kubelet[2401]: I0123 17:58:50.402253 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402633 kubelet[2401]: I0123 17:58:50.402304 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402633 kubelet[2401]: I0123 17:58:50.402329 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8c63ee866c7a98fe98327cf938fdb0d-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" (UID: \"f8c63ee866c7a98fe98327cf938fdb0d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.402633 kubelet[2401]: I0123 17:58:50.402351 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8c63ee866c7a98fe98327cf938fdb0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" (UID: \"f8c63ee866c7a98fe98327cf938fdb0d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.404545 kubelet[2401]: E0123 17:58:50.404488 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.73.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-9-0be39219fc?timeout=10s\": dial tcp 49.12.73.152:6443: connect: connection refused" interval="400ms" Jan 23 17:58:50.590823 kubelet[2401]: I0123 17:58:50.590754 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.591504 kubelet[2401]: E0123 17:58:50.591442 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.73.152:6443/api/v1/nodes\": dial tcp 49.12.73.152:6443: connect: connection refused" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.657229 containerd[1554]: time="2026-01-23T17:58:50.657181051Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-9-0be39219fc,Uid:30c27a3782de662aaa9429213a1c088e,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:50.671783 containerd[1554]: time="2026-01-23T17:58:50.671713025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-9-0be39219fc,Uid:e2e3a600cf90f383aaf7fdcecef38945,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:50.678132 containerd[1554]: time="2026-01-23T17:58:50.677760413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-9-0be39219fc,Uid:f8c63ee866c7a98fe98327cf938fdb0d,Namespace:kube-system,Attempt:0,}" Jan 23 17:58:50.805776 kubelet[2401]: E0123 17:58:50.805685 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.73.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-9-0be39219fc?timeout=10s\": dial tcp 49.12.73.152:6443: connect: connection refused" interval="800ms" Jan 23 17:58:50.994775 kubelet[2401]: I0123 17:58:50.994671 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:50.995393 kubelet[2401]: E0123 17:58:50.995343 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.73.152:6443/api/v1/nodes\": dial tcp 49.12.73.152:6443: connect: connection refused" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:51.069653 kubelet[2401]: E0123 17:58:51.069606 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.12.73.152:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:58:51.162911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653546685.mount: Deactivated successfully. 
Jan 23 17:58:51.170314 containerd[1554]: time="2026-01-23T17:58:51.169562783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:51.172905 containerd[1554]: time="2026-01-23T17:58:51.172859586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 23 17:58:51.175818 containerd[1554]: time="2026-01-23T17:58:51.175766341Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:51.176666 containerd[1554]: time="2026-01-23T17:58:51.176622032Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:51.177054 containerd[1554]: time="2026-01-23T17:58:51.177027456Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 17:58:51.178996 containerd[1554]: time="2026-01-23T17:58:51.178953160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:51.180758 containerd[1554]: time="2026-01-23T17:58:51.180701057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:58:51.181541 containerd[1554]: time="2026-01-23T17:58:51.181500694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 
17:58:51.184011 containerd[1554]: time="2026-01-23T17:58:51.183571937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 509.936117ms" Jan 23 17:58:51.184868 containerd[1554]: time="2026-01-23T17:58:51.184817437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 525.398607ms" Jan 23 17:58:51.198490 containerd[1554]: time="2026-01-23T17:58:51.198429135Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 519.048009ms" Jan 23 17:58:51.221879 kubelet[2401]: E0123 17:58:51.221545 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.12.73.152:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-9-0be39219fc&limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 17:58:51.230963 containerd[1554]: time="2026-01-23T17:58:51.229311437Z" level=info msg="connecting to shim 996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160" address="unix:///run/containerd/s/6f2367f314f6d3d045d900e57ddecfdc84b5db6697d369683d3a6855670761f2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:51.239646 
containerd[1554]: time="2026-01-23T17:58:51.239601066Z" level=info msg="connecting to shim 49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae" address="unix:///run/containerd/s/fd7c4e19c81dac7021e1bdba368e2a09e03113ba894f45616e384c7dc87bb878" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:51.246892 containerd[1554]: time="2026-01-23T17:58:51.246693707Z" level=info msg="connecting to shim 56edca07bc33373f86d26a72ec37f81c87139324d23e0b1e36967fdaf5cb2af8" address="unix:///run/containerd/s/e65bce3d3c903d347805a8a7e16ad72d6231519170555afb43e4feb9e8fb2281" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:51.270513 systemd[1]: Started cri-containerd-996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160.scope - libcontainer container 996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160. Jan 23 17:58:51.276863 systemd[1]: Started cri-containerd-49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae.scope - libcontainer container 49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae. Jan 23 17:58:51.295857 systemd[1]: Started cri-containerd-56edca07bc33373f86d26a72ec37f81c87139324d23e0b1e36967fdaf5cb2af8.scope - libcontainer container 56edca07bc33373f86d26a72ec37f81c87139324d23e0b1e36967fdaf5cb2af8. 
Jan 23 17:58:51.348554 containerd[1554]: time="2026-01-23T17:58:51.348488682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-9-0be39219fc,Uid:e2e3a600cf90f383aaf7fdcecef38945,Namespace:kube-system,Attempt:0,} returns sandbox id \"49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae\"" Jan 23 17:58:51.358740 containerd[1554]: time="2026-01-23T17:58:51.358626968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-9-0be39219fc,Uid:30c27a3782de662aaa9429213a1c088e,Namespace:kube-system,Attempt:0,} returns sandbox id \"996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160\"" Jan 23 17:58:51.358740 containerd[1554]: time="2026-01-23T17:58:51.358696434Z" level=info msg="CreateContainer within sandbox \"49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:58:51.365185 containerd[1554]: time="2026-01-23T17:58:51.365145785Z" level=info msg="CreateContainer within sandbox \"996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:58:51.369779 kubelet[2401]: E0123 17:58:51.369729 2401 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.12.73.152:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.73.152:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:58:51.371113 containerd[1554]: time="2026-01-23T17:58:51.371012544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-9-0be39219fc,Uid:f8c63ee866c7a98fe98327cf938fdb0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"56edca07bc33373f86d26a72ec37f81c87139324d23e0b1e36967fdaf5cb2af8\"" Jan 23 17:58:51.379557 containerd[1554]: 
time="2026-01-23T17:58:51.379513839Z" level=info msg="CreateContainer within sandbox \"56edca07bc33373f86d26a72ec37f81c87139324d23e0b1e36967fdaf5cb2af8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:58:51.380825 containerd[1554]: time="2026-01-23T17:58:51.380760981Z" level=info msg="Container e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:51.384451 containerd[1554]: time="2026-01-23T17:58:51.384413121Z" level=info msg="Container 2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:51.397064 containerd[1554]: time="2026-01-23T17:58:51.397008896Z" level=info msg="CreateContainer within sandbox \"49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24\"" Jan 23 17:58:51.399334 containerd[1554]: time="2026-01-23T17:58:51.399279728Z" level=info msg="StartContainer for \"e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24\"" Jan 23 17:58:51.400100 containerd[1554]: time="2026-01-23T17:58:51.400050859Z" level=info msg="Container c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:51.402073 containerd[1554]: time="2026-01-23T17:58:51.402031736Z" level=info msg="connecting to shim e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24" address="unix:///run/containerd/s/fd7c4e19c81dac7021e1bdba368e2a09e03113ba894f45616e384c7dc87bb878" protocol=ttrpc version=3 Jan 23 17:58:51.403460 containerd[1554]: time="2026-01-23T17:58:51.403423574Z" level=info msg="CreateContainer within sandbox \"996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0\"" Jan 23 17:58:51.405938 containerd[1554]: time="2026-01-23T17:58:51.405901362Z" level=info msg="StartContainer for \"2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0\"" Jan 23 17:58:51.407946 containerd[1554]: time="2026-01-23T17:58:51.407828228Z" level=info msg="connecting to shim 2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0" address="unix:///run/containerd/s/6f2367f314f6d3d045d900e57ddecfdc84b5db6697d369683d3a6855670761f2" protocol=ttrpc version=3 Jan 23 17:58:51.414895 containerd[1554]: time="2026-01-23T17:58:51.414829622Z" level=info msg="CreateContainer within sandbox \"56edca07bc33373f86d26a72ec37f81c87139324d23e0b1e36967fdaf5cb2af8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b\"" Jan 23 17:58:51.415844 containerd[1554]: time="2026-01-23T17:58:51.415798420Z" level=info msg="StartContainer for \"c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b\"" Jan 23 17:58:51.418685 containerd[1554]: time="2026-01-23T17:58:51.418501341Z" level=info msg="connecting to shim c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b" address="unix:///run/containerd/s/e65bce3d3c903d347805a8a7e16ad72d6231519170555afb43e4feb9e8fb2281" protocol=ttrpc version=3 Jan 23 17:58:51.432801 systemd[1]: Started cri-containerd-e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24.scope - libcontainer container e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24. Jan 23 17:58:51.446900 systemd[1]: Started cri-containerd-2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0.scope - libcontainer container 2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0. 
Jan 23 17:58:51.460410 systemd[1]: Started cri-containerd-c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b.scope - libcontainer container c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b. Jan 23 17:58:51.510252 containerd[1554]: time="2026-01-23T17:58:51.510209398Z" level=info msg="StartContainer for \"e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24\" returns successfully" Jan 23 17:58:51.546220 containerd[1554]: time="2026-01-23T17:58:51.546095802Z" level=info msg="StartContainer for \"2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0\" returns successfully" Jan 23 17:58:51.559631 containerd[1554]: time="2026-01-23T17:58:51.558581953Z" level=info msg="StartContainer for \"c81af20be246f7ed3b08b2dca9a732ce1d3b8e611404f013745a19f1bd4ef26b\" returns successfully" Jan 23 17:58:51.607036 kubelet[2401]: E0123 17:58:51.606915 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.73.152:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-9-0be39219fc?timeout=10s\": dial tcp 49.12.73.152:6443: connect: connection refused" interval="1.6s" Jan 23 17:58:51.798772 kubelet[2401]: I0123 17:58:51.798671 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:52.254688 kubelet[2401]: E0123 17:58:52.254542 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:52.255204 kubelet[2401]: E0123 17:58:52.255179 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:52.259531 kubelet[2401]: E0123 17:58:52.259506 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:53.261965 kubelet[2401]: E0123 17:58:53.261932 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:53.262766 kubelet[2401]: E0123 17:58:53.262734 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:53.263339 kubelet[2401]: E0123 17:58:53.263196 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.182157 kubelet[2401]: I0123 17:58:54.181876 2401 apiserver.go:52] "Watching apiserver" Jan 23 17:58:54.242367 kubelet[2401]: E0123 17:58:54.242304 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.264521 kubelet[2401]: E0123 17:58:54.264420 2401 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-9-0be39219fc\" not found" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.299935 kubelet[2401]: I0123 17:58:54.299877 2401 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:58:54.360144 kubelet[2401]: I0123 17:58:54.359946 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.360144 kubelet[2401]: E0123 17:58:54.359985 2401 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459-2-3-9-0be39219fc\": node \"ci-4459-2-3-9-0be39219fc\" not found" Jan 23 17:58:54.398711 kubelet[2401]: I0123 17:58:54.398673 2401 
kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.431978 kubelet[2401]: E0123 17:58:54.431925 2401 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.431978 kubelet[2401]: I0123 17:58:54.431963 2401 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.435491 kubelet[2401]: E0123 17:58:54.435373 2401 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-9-0be39219fc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.435491 kubelet[2401]: I0123 17:58:54.435408 2401 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:54.438634 kubelet[2401]: E0123 17:58:54.438597 2401 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:55.055563 kubelet[2401]: I0123 17:58:55.055524 2401 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:56.628137 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-7.scope)... Jan 23 17:58:56.628316 systemd[1]: Reloading... Jan 23 17:58:56.755313 zram_generator::config[2731]: No configuration found. Jan 23 17:58:56.986067 systemd[1]: Reloading finished in 357 ms. Jan 23 17:58:57.028489 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 17:58:57.040179 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:58:57.042349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:57.042423 systemd[1]: kubelet.service: Consumed 1.688s CPU time, 121.4M memory peak. Jan 23 17:58:57.046258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:57.228536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:57.241685 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:58:57.303323 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:58:57.303323 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:58:57.303678 kubelet[2773]: I0123 17:58:57.303396 2773 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:58:57.311817 kubelet[2773]: I0123 17:58:57.311771 2773 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 17:58:57.311817 kubelet[2773]: I0123 17:58:57.311804 2773 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:58:57.311965 kubelet[2773]: I0123 17:58:57.311837 2773 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 17:58:57.311965 kubelet[2773]: I0123 17:58:57.311845 2773 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:58:57.313161 kubelet[2773]: I0123 17:58:57.312398 2773 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:58:57.314074 kubelet[2773]: I0123 17:58:57.314051 2773 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:58:57.316706 kubelet[2773]: I0123 17:58:57.316657 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:58:57.321439 kubelet[2773]: I0123 17:58:57.321369 2773 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:58:57.323920 kubelet[2773]: I0123 17:58:57.323884 2773 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 17:58:57.324130 kubelet[2773]: I0123 17:58:57.324058 2773 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:58:57.324248 kubelet[2773]: I0123 17:58:57.324087 2773 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-2-3-9-0be39219fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:58:57.324340 kubelet[2773]: I0123 17:58:57.324251 2773 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:58:57.324340 kubelet[2773]: I0123 17:58:57.324260 2773 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 17:58:57.324340 kubelet[2773]: I0123 17:58:57.324311 2773 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 17:58:57.325406 kubelet[2773]: I0123 17:58:57.325387 2773 
state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:57.325609 kubelet[2773]: I0123 17:58:57.325572 2773 kubelet.go:475] "Attempting to sync node with API server" Jan 23 17:58:57.326050 kubelet[2773]: I0123 17:58:57.325608 2773 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:58:57.326181 kubelet[2773]: I0123 17:58:57.326102 2773 kubelet.go:387] "Adding apiserver pod source" Jan 23 17:58:57.329258 kubelet[2773]: I0123 17:58:57.326116 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:58:57.341278 kubelet[2773]: I0123 17:58:57.340427 2773 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:58:57.341278 kubelet[2773]: I0123 17:58:57.341026 2773 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:58:57.341278 kubelet[2773]: I0123 17:58:57.341052 2773 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 17:58:57.345085 kubelet[2773]: I0123 17:58:57.344670 2773 server.go:1262] "Started kubelet" Jan 23 17:58:57.346762 kubelet[2773]: I0123 17:58:57.346724 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:58:57.357897 kubelet[2773]: I0123 17:58:57.357821 2773 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:58:57.359491 kubelet[2773]: I0123 17:58:57.358882 2773 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:58:57.359491 kubelet[2773]: I0123 17:58:57.358947 2773 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 17:58:57.359863 kubelet[2773]: I0123 17:58:57.359841 2773 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:58:57.361304 kubelet[2773]: I0123 17:58:57.360195 2773 server.go:310] "Adding debug handlers to kubelet server" Jan 23 17:58:57.362033 kubelet[2773]: I0123 17:58:57.361742 2773 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 17:58:57.364313 kubelet[2773]: I0123 17:58:57.364194 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:58:57.365600 kubelet[2773]: I0123 17:58:57.365023 2773 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 17:58:57.365600 kubelet[2773]: I0123 17:58:57.365451 2773 reconciler.go:29] "Reconciler: start to sync state" Jan 23 17:58:57.367593 kubelet[2773]: I0123 17:58:57.367541 2773 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 17:58:57.369042 kubelet[2773]: I0123 17:58:57.368985 2773 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 17:58:57.369042 kubelet[2773]: I0123 17:58:57.369027 2773 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 17:58:57.369042 kubelet[2773]: I0123 17:58:57.369053 2773 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 17:58:57.369201 kubelet[2773]: E0123 17:58:57.369113 2773 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:58:57.375147 kubelet[2773]: I0123 17:58:57.375028 2773 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:58:57.376405 kubelet[2773]: I0123 17:58:57.376371 2773 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:58:57.378559 kubelet[2773]: E0123 17:58:57.378048 2773 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:58:57.379311 kubelet[2773]: I0123 17:58:57.379076 2773 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:58:57.445575 kubelet[2773]: I0123 17:58:57.445535 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:58:57.445575 kubelet[2773]: I0123 17:58:57.445552 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:58:57.445575 kubelet[2773]: I0123 17:58:57.445578 2773 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:58:57.445784 kubelet[2773]: I0123 17:58:57.445723 2773 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:58:57.445784 kubelet[2773]: I0123 17:58:57.445733 2773 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:58:57.445784 kubelet[2773]: I0123 17:58:57.445764 2773 policy_none.go:49] "None policy: Start" Jan 23 17:58:57.445784 kubelet[2773]: I0123 17:58:57.445773 2773 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 17:58:57.445784 kubelet[2773]: I0123 17:58:57.445780 2773 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 17:58:57.445975 kubelet[2773]: I0123 17:58:57.445882 2773 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 17:58:57.445975 kubelet[2773]: I0123 17:58:57.445892 2773 policy_none.go:47] "Start" Jan 23 17:58:57.453807 kubelet[2773]: E0123 17:58:57.453766 2773 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:58:57.453989 kubelet[2773]: I0123 17:58:57.453971 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:58:57.454020 kubelet[2773]: I0123 17:58:57.453991 2773 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 
17:58:57.455688 kubelet[2773]: I0123 17:58:57.454922 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:58:57.461531 kubelet[2773]: E0123 17:58:57.460026 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:58:57.471586 kubelet[2773]: I0123 17:58:57.471552 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.472139 kubelet[2773]: I0123 17:58:57.472103 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.473288 kubelet[2773]: I0123 17:58:57.473193 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.483626 kubelet[2773]: E0123 17:58:57.483571 2773 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.564411 kubelet[2773]: I0123 17:58:57.564176 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567179 kubelet[2773]: I0123 17:58:57.567130 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8c63ee866c7a98fe98327cf938fdb0d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" (UID: \"f8c63ee866c7a98fe98327cf938fdb0d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567179 kubelet[2773]: I0123 17:58:57.567179 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567672 kubelet[2773]: I0123 17:58:57.567239 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567672 kubelet[2773]: I0123 17:58:57.567261 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8c63ee866c7a98fe98327cf938fdb0d-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" (UID: \"f8c63ee866c7a98fe98327cf938fdb0d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567672 kubelet[2773]: I0123 17:58:57.567304 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8c63ee866c7a98fe98327cf938fdb0d-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-9-0be39219fc\" (UID: \"f8c63ee866c7a98fe98327cf938fdb0d\") " pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567672 kubelet[2773]: I0123 17:58:57.567391 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567672 kubelet[2773]: I0123 17:58:57.567431 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567913 kubelet[2773]: I0123 17:58:57.567468 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30c27a3782de662aaa9429213a1c088e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-9-0be39219fc\" (UID: \"30c27a3782de662aaa9429213a1c088e\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.567913 kubelet[2773]: I0123 17:58:57.567498 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2e3a600cf90f383aaf7fdcecef38945-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-9-0be39219fc\" (UID: \"e2e3a600cf90f383aaf7fdcecef38945\") " pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.576500 kubelet[2773]: I0123 17:58:57.576462 2773 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:57.576868 kubelet[2773]: I0123 17:58:57.576689 2773 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-9-0be39219fc" Jan 23 17:58:58.332386 kubelet[2773]: I0123 17:58:58.332259 2773 apiserver.go:52] "Watching apiserver" Jan 23 17:58:58.366696 kubelet[2773]: I0123 17:58:58.365806 2773 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 17:58:58.422136 kubelet[2773]: I0123 17:58:58.422098 2773 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:58.429903 kubelet[2773]: E0123 17:58:58.429860 2773 kubelet.go:3221] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-9-0be39219fc\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" Jan 23 17:58:58.435638 kubelet[2773]: I0123 17:58:58.434976 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-9-0be39219fc" podStartSLOduration=1.434957916 podStartE2EDuration="1.434957916s" podCreationTimestamp="2026-01-23 17:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:58.431917034 +0000 UTC m=+1.184532488" watchObservedRunningTime="2026-01-23 17:58:58.434957916 +0000 UTC m=+1.187573370" Jan 23 17:58:58.436854 kubelet[2773]: I0123 17:58:58.436812 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-9-0be39219fc" podStartSLOduration=1.436794023 podStartE2EDuration="1.436794023s" podCreationTimestamp="2026-01-23 17:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:58.416384048 +0000 UTC m=+1.168999502" watchObservedRunningTime="2026-01-23 17:58:58.436794023 +0000 UTC m=+1.189409477" Jan 23 17:58:58.467020 kubelet[2773]: I0123 17:58:58.466953 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-9-0be39219fc" podStartSLOduration=3.466932192 podStartE2EDuration="3.466932192s" podCreationTimestamp="2026-01-23 17:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:58:58.448633775 +0000 UTC m=+1.201249229" watchObservedRunningTime="2026-01-23 17:58:58.466932192 +0000 UTC m=+1.219547646" Jan 23 17:59:02.820541 kubelet[2773]: I0123 17:59:02.820501 2773 kuberuntime_manager.go:1828] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:59:02.821364 containerd[1554]: time="2026-01-23T17:59:02.821330341Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:59:02.822119 kubelet[2773]: I0123 17:59:02.821550 2773 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:59:03.538347 systemd[1]: Created slice kubepods-besteffort-pod2ab97068_811d_43ff_9706_95d2f058ff06.slice - libcontainer container kubepods-besteffort-pod2ab97068_811d_43ff_9706_95d2f058ff06.slice. Jan 23 17:59:03.605614 kubelet[2773]: I0123 17:59:03.605568 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ab97068-811d-43ff-9706-95d2f058ff06-lib-modules\") pod \"kube-proxy-4snch\" (UID: \"2ab97068-811d-43ff-9706-95d2f058ff06\") " pod="kube-system/kube-proxy-4snch" Jan 23 17:59:03.605614 kubelet[2773]: I0123 17:59:03.605619 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ab97068-811d-43ff-9706-95d2f058ff06-kube-proxy\") pod \"kube-proxy-4snch\" (UID: \"2ab97068-811d-43ff-9706-95d2f058ff06\") " pod="kube-system/kube-proxy-4snch" Jan 23 17:59:03.605787 kubelet[2773]: I0123 17:59:03.605639 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ab97068-811d-43ff-9706-95d2f058ff06-xtables-lock\") pod \"kube-proxy-4snch\" (UID: \"2ab97068-811d-43ff-9706-95d2f058ff06\") " pod="kube-system/kube-proxy-4snch" Jan 23 17:59:03.605787 kubelet[2773]: I0123 17:59:03.605658 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8lq\" (UniqueName: 
\"kubernetes.io/projected/2ab97068-811d-43ff-9706-95d2f058ff06-kube-api-access-vn8lq\") pod \"kube-proxy-4snch\" (UID: \"2ab97068-811d-43ff-9706-95d2f058ff06\") " pod="kube-system/kube-proxy-4snch" Jan 23 17:59:03.716884 kubelet[2773]: E0123 17:59:03.716825 2773 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 17:59:03.716884 kubelet[2773]: E0123 17:59:03.716891 2773 projected.go:196] Error preparing data for projected volume kube-api-access-vn8lq for pod kube-system/kube-proxy-4snch: configmap "kube-root-ca.crt" not found Jan 23 17:59:03.717043 kubelet[2773]: E0123 17:59:03.716974 2773 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2ab97068-811d-43ff-9706-95d2f058ff06-kube-api-access-vn8lq podName:2ab97068-811d-43ff-9706-95d2f058ff06 nodeName:}" failed. No retries permitted until 2026-01-23 17:59:04.216951155 +0000 UTC m=+6.969566609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vn8lq" (UniqueName: "kubernetes.io/projected/2ab97068-811d-43ff-9706-95d2f058ff06-kube-api-access-vn8lq") pod "kube-proxy-4snch" (UID: "2ab97068-811d-43ff-9706-95d2f058ff06") : configmap "kube-root-ca.crt" not found Jan 23 17:59:04.061332 systemd[1]: Created slice kubepods-besteffort-podf9ac43e2_f5bf_4819_8941_ac40921de4ed.slice - libcontainer container kubepods-besteffort-podf9ac43e2_f5bf_4819_8941_ac40921de4ed.slice. 
Jan 23 17:59:04.110073 kubelet[2773]: I0123 17:59:04.109957 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9ac43e2-f5bf-4819-8941-ac40921de4ed-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-vdqnm\" (UID: \"f9ac43e2-f5bf-4819-8941-ac40921de4ed\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vdqnm" Jan 23 17:59:04.110815 kubelet[2773]: I0123 17:59:04.110221 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d6zj\" (UniqueName: \"kubernetes.io/projected/f9ac43e2-f5bf-4819-8941-ac40921de4ed-kube-api-access-8d6zj\") pod \"tigera-operator-65cdcdfd6d-vdqnm\" (UID: \"f9ac43e2-f5bf-4819-8941-ac40921de4ed\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vdqnm" Jan 23 17:59:04.371430 containerd[1554]: time="2026-01-23T17:59:04.371302383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vdqnm,Uid:f9ac43e2-f5bf-4819-8941-ac40921de4ed,Namespace:tigera-operator,Attempt:0,}" Jan 23 17:59:04.399558 containerd[1554]: time="2026-01-23T17:59:04.399504378Z" level=info msg="connecting to shim 9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e" address="unix:///run/containerd/s/c11eebfe9ef545d04104e422788724b6ce3bd4e714cdb504cfaa65b66d33e530" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:04.431491 systemd[1]: Started cri-containerd-9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e.scope - libcontainer container 9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e. 
Jan 23 17:59:04.453874 containerd[1554]: time="2026-01-23T17:59:04.453787340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4snch,Uid:2ab97068-811d-43ff-9706-95d2f058ff06,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:04.479896 containerd[1554]: time="2026-01-23T17:59:04.479764559Z" level=info msg="connecting to shim 27d1ed46abe9003aa1ed16037b6579a8b4679baf782d9ea5375198771b0b730b" address="unix:///run/containerd/s/303115f6c2c445aae8a4596c0127a4d1cecb52df71155a4afa00557aaab9b491" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:04.481943 containerd[1554]: time="2026-01-23T17:59:04.481511583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vdqnm,Uid:f9ac43e2-f5bf-4819-8941-ac40921de4ed,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e\"" Jan 23 17:59:04.485873 containerd[1554]: time="2026-01-23T17:59:04.485784142Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 17:59:04.528525 systemd[1]: Started cri-containerd-27d1ed46abe9003aa1ed16037b6579a8b4679baf782d9ea5375198771b0b730b.scope - libcontainer container 27d1ed46abe9003aa1ed16037b6579a8b4679baf782d9ea5375198771b0b730b. 
Jan 23 17:59:04.557012 containerd[1554]: time="2026-01-23T17:59:04.556900572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4snch,Uid:2ab97068-811d-43ff-9706-95d2f058ff06,Namespace:kube-system,Attempt:0,} returns sandbox id \"27d1ed46abe9003aa1ed16037b6579a8b4679baf782d9ea5375198771b0b730b\"" Jan 23 17:59:04.564680 containerd[1554]: time="2026-01-23T17:59:04.563715077Z" level=info msg="CreateContainer within sandbox \"27d1ed46abe9003aa1ed16037b6579a8b4679baf782d9ea5375198771b0b730b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:59:04.575674 containerd[1554]: time="2026-01-23T17:59:04.575614472Z" level=info msg="Container ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:04.587568 containerd[1554]: time="2026-01-23T17:59:04.587492254Z" level=info msg="CreateContainer within sandbox \"27d1ed46abe9003aa1ed16037b6579a8b4679baf782d9ea5375198771b0b730b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655\"" Jan 23 17:59:04.588719 containerd[1554]: time="2026-01-23T17:59:04.588695802Z" level=info msg="StartContainer for \"ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655\"" Jan 23 17:59:04.590643 containerd[1554]: time="2026-01-23T17:59:04.590582758Z" level=info msg="connecting to shim ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655" address="unix:///run/containerd/s/303115f6c2c445aae8a4596c0127a4d1cecb52df71155a4afa00557aaab9b491" protocol=ttrpc version=3 Jan 23 17:59:04.613468 systemd[1]: Started cri-containerd-ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655.scope - libcontainer container ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655. 
Jan 23 17:59:04.700418 containerd[1554]: time="2026-01-23T17:59:04.699950728Z" level=info msg="StartContainer for \"ffaf0cb549122451761115446414df8b05a11286bced307fca5248bc6673c655\" returns successfully" Jan 23 17:59:05.454939 kubelet[2773]: I0123 17:59:05.454876 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4snch" podStartSLOduration=2.4548609040000002 podStartE2EDuration="2.454860904s" podCreationTimestamp="2026-01-23 17:59:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:05.454549791 +0000 UTC m=+8.207165245" watchObservedRunningTime="2026-01-23 17:59:05.454860904 +0000 UTC m=+8.207476358" Jan 23 17:59:06.225105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795806549.mount: Deactivated successfully. Jan 23 17:59:06.697708 containerd[1554]: time="2026-01-23T17:59:06.697543152Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:06.699343 containerd[1554]: time="2026-01-23T17:59:06.698912721Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 17:59:06.700481 containerd[1554]: time="2026-01-23T17:59:06.700433579Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:06.705802 containerd[1554]: time="2026-01-23T17:59:06.705206476Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.219312585s" Jan 23 17:59:06.705802 
containerd[1554]: time="2026-01-23T17:59:06.705307816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 17:59:06.706224 containerd[1554]: time="2026-01-23T17:59:06.706148953Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:06.712761 containerd[1554]: time="2026-01-23T17:59:06.712673084Z" level=info msg="CreateContainer within sandbox \"9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 17:59:06.723561 containerd[1554]: time="2026-01-23T17:59:06.723512523Z" level=info msg="Container f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:06.726263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276693692.mount: Deactivated successfully. 
Jan 23 17:59:06.732869 containerd[1554]: time="2026-01-23T17:59:06.732730204Z" level=info msg="CreateContainer within sandbox \"9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59\""
Jan 23 17:59:06.733779 containerd[1554]: time="2026-01-23T17:59:06.733641342Z" level=info msg="StartContainer for \"f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59\""
Jan 23 17:59:06.737132 containerd[1554]: time="2026-01-23T17:59:06.737083334Z" level=info msg="connecting to shim f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59" address="unix:///run/containerd/s/c11eebfe9ef545d04104e422788724b6ce3bd4e714cdb504cfaa65b66d33e530" protocol=ttrpc version=3
Jan 23 17:59:06.766575 systemd[1]: Started cri-containerd-f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59.scope - libcontainer container f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59.
Jan 23 17:59:06.807709 containerd[1554]: time="2026-01-23T17:59:06.807615010Z" level=info msg="StartContainer for \"f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59\" returns successfully"
Jan 23 17:59:10.645644 kubelet[2773]: I0123 17:59:10.643310 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-vdqnm" podStartSLOduration=4.420107961 podStartE2EDuration="6.64329154s" podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="2026-01-23 17:59:04.484157597 +0000 UTC m=+7.236773051" lastFinishedPulling="2026-01-23 17:59:06.707341176 +0000 UTC m=+9.459956630" observedRunningTime="2026-01-23 17:59:07.463175791 +0000 UTC m=+10.215791245" watchObservedRunningTime="2026-01-23 17:59:10.64329154 +0000 UTC m=+13.395906994"
Jan 23 17:59:11.274516 sudo[1816]: pam_unix(sudo:session): session closed for user root
Jan 23 17:59:11.369838 sshd[1815]: Connection closed by 68.220.241.50 port 34416
Jan 23 17:59:11.371673 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:11.379341 systemd[1]: sshd@6-49.12.73.152:22-68.220.241.50:34416.service: Deactivated successfully.
Jan 23 17:59:11.386143 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 17:59:11.386374 systemd[1]: session-7.scope: Consumed 7.312s CPU time, 221.3M memory peak.
Jan 23 17:59:11.391488 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Jan 23 17:59:11.398341 systemd-logind[1520]: Removed session 7.
Jan 23 17:59:23.325520 systemd[1]: Created slice kubepods-besteffort-podae8b975a_67c1_46c3_a7b8_fd547890cf22.slice - libcontainer container kubepods-besteffort-podae8b975a_67c1_46c3_a7b8_fd547890cf22.slice.
Jan 23 17:59:23.332580 kubelet[2773]: I0123 17:59:23.332524 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2pm\" (UniqueName: \"kubernetes.io/projected/ae8b975a-67c1-46c3-a7b8-fd547890cf22-kube-api-access-tr2pm\") pod \"calico-typha-5864767665-fx79m\" (UID: \"ae8b975a-67c1-46c3-a7b8-fd547890cf22\") " pod="calico-system/calico-typha-5864767665-fx79m"
Jan 23 17:59:23.332580 kubelet[2773]: I0123 17:59:23.332575 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae8b975a-67c1-46c3-a7b8-fd547890cf22-tigera-ca-bundle\") pod \"calico-typha-5864767665-fx79m\" (UID: \"ae8b975a-67c1-46c3-a7b8-fd547890cf22\") " pod="calico-system/calico-typha-5864767665-fx79m"
Jan 23 17:59:23.333032 kubelet[2773]: I0123 17:59:23.332594 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ae8b975a-67c1-46c3-a7b8-fd547890cf22-typha-certs\") pod \"calico-typha-5864767665-fx79m\" (UID: \"ae8b975a-67c1-46c3-a7b8-fd547890cf22\") " pod="calico-system/calico-typha-5864767665-fx79m"
Jan 23 17:59:23.542653 systemd[1]: Created slice kubepods-besteffort-pod499ec2ea_f8bf_4e03_90f3_ff6c6b2416e3.slice - libcontainer container kubepods-besteffort-pod499ec2ea_f8bf_4e03_90f3_ff6c6b2416e3.slice.
Jan 23 17:59:23.632201 containerd[1554]: time="2026-01-23T17:59:23.632090615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5864767665-fx79m,Uid:ae8b975a-67c1-46c3-a7b8-fd547890cf22,Namespace:calico-system,Attempt:0,}"
Jan 23 17:59:23.634456 kubelet[2773]: I0123 17:59:23.634396 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-node-certs\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.635573 kubelet[2773]: I0123 17:59:23.635454 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-flexvol-driver-host\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.635962 kubelet[2773]: I0123 17:59:23.635938 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-policysync\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.636049 kubelet[2773]: I0123 17:59:23.635979 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-var-lib-calico\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.636049 kubelet[2773]: I0123 17:59:23.636015 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-var-run-calico\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637541 kubelet[2773]: I0123 17:59:23.637334 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-cni-bin-dir\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637541 kubelet[2773]: I0123 17:59:23.637379 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-cni-net-dir\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637541 kubelet[2773]: I0123 17:59:23.637401 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-lib-modules\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637541 kubelet[2773]: I0123 17:59:23.637424 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6cz\" (UniqueName: \"kubernetes.io/projected/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-kube-api-access-kk6cz\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637541 kubelet[2773]: I0123 17:59:23.637442 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-cni-log-dir\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637797 kubelet[2773]: I0123 17:59:23.637458 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-tigera-ca-bundle\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.637797 kubelet[2773]: I0123 17:59:23.637481 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3-xtables-lock\") pod \"calico-node-fkbl8\" (UID: \"499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3\") " pod="calico-system/calico-node-fkbl8"
Jan 23 17:59:23.659393 containerd[1554]: time="2026-01-23T17:59:23.659345761Z" level=info msg="connecting to shim 55879c82cb5f6396062cbfc883ea3c76b3c9aef2adf4373374b94afac56cd1a5" address="unix:///run/containerd/s/1f268191c29b7fef34dfb0199f7152e6eb55e3a26ba80a6fcfea4473eb38b86f" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:59:23.695020 systemd[1]: Started cri-containerd-55879c82cb5f6396062cbfc883ea3c76b3c9aef2adf4373374b94afac56cd1a5.scope - libcontainer container 55879c82cb5f6396062cbfc883ea3c76b3c9aef2adf4373374b94afac56cd1a5.
Jan 23 17:59:23.728513 kubelet[2773]: E0123 17:59:23.728458 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 17:59:23.732462 kubelet[2773]: E0123 17:59:23.732412 2773 status_manager.go:1018] "Failed to get status for pod" err="pods \"csi-node-driver-lc96b\" is forbidden: User \"system:node:ci-4459-2-3-9-0be39219fc\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-2-3-9-0be39219fc' and this object" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" pod="calico-system/csi-node-driver-lc96b"
Jan 23 17:59:23.741210 kubelet[2773]: E0123 17:59:23.740384 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:23.741655 kubelet[2773]: W0123 17:59:23.741458 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:23.741655 kubelet[2773]: E0123 17:59:23.741500 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:23.770118 kubelet[2773]: E0123 17:59:23.770105 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:23.770218 kubelet[2773]: W0123 17:59:23.770206 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:23.770283 kubelet[2773]: E0123 17:59:23.770261 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 17:59:23.772398 kubelet[2773]: E0123 17:59:23.772375 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.772398 kubelet[2773]: W0123 17:59:23.772394 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.772500 kubelet[2773]: E0123 17:59:23.772411 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.773844 kubelet[2773]: E0123 17:59:23.773498 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.773844 kubelet[2773]: W0123 17:59:23.773840 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.774074 kubelet[2773]: E0123 17:59:23.773860 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.774488 kubelet[2773]: E0123 17:59:23.774465 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.774488 kubelet[2773]: W0123 17:59:23.774482 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.774561 kubelet[2773]: E0123 17:59:23.774495 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.775204 kubelet[2773]: E0123 17:59:23.775175 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.776498 kubelet[2773]: W0123 17:59:23.775198 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.776581 kubelet[2773]: E0123 17:59:23.776506 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.781693 kubelet[2773]: E0123 17:59:23.781663 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.781904 kubelet[2773]: W0123 17:59:23.781821 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.781904 kubelet[2773]: E0123 17:59:23.781848 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.806536 kubelet[2773]: E0123 17:59:23.806493 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.807245 kubelet[2773]: W0123 17:59:23.806610 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.807245 kubelet[2773]: E0123 17:59:23.807098 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.815315 containerd[1554]: time="2026-01-23T17:59:23.815082959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5864767665-fx79m,Uid:ae8b975a-67c1-46c3-a7b8-fd547890cf22,Namespace:calico-system,Attempt:0,} returns sandbox id \"55879c82cb5f6396062cbfc883ea3c76b3c9aef2adf4373374b94afac56cd1a5\"" Jan 23 17:59:23.822700 containerd[1554]: time="2026-01-23T17:59:23.822655895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:59:23.827973 kubelet[2773]: E0123 17:59:23.827916 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.827973 kubelet[2773]: W0123 17:59:23.827940 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.828129 kubelet[2773]: E0123 17:59:23.827998 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.829297 kubelet[2773]: E0123 17:59:23.828220 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.829297 kubelet[2773]: W0123 17:59:23.828240 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.829297 kubelet[2773]: E0123 17:59:23.829156 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.829473 kubelet[2773]: E0123 17:59:23.829445 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.829473 kubelet[2773]: W0123 17:59:23.829456 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.829473 kubelet[2773]: E0123 17:59:23.829467 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.829650 kubelet[2773]: E0123 17:59:23.829621 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.829650 kubelet[2773]: W0123 17:59:23.829643 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.829650 kubelet[2773]: E0123 17:59:23.829651 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.829832 kubelet[2773]: E0123 17:59:23.829817 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.829832 kubelet[2773]: W0123 17:59:23.829830 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.829900 kubelet[2773]: E0123 17:59:23.829838 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.830043 kubelet[2773]: E0123 17:59:23.830015 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.830043 kubelet[2773]: W0123 17:59:23.830044 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.830124 kubelet[2773]: E0123 17:59:23.830055 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.830344 kubelet[2773]: E0123 17:59:23.830319 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.830344 kubelet[2773]: W0123 17:59:23.830334 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.830344 kubelet[2773]: E0123 17:59:23.830345 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.830762 kubelet[2773]: E0123 17:59:23.830734 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.830762 kubelet[2773]: W0123 17:59:23.830752 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.830896 kubelet[2773]: E0123 17:59:23.830777 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.831416 kubelet[2773]: E0123 17:59:23.831393 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.831416 kubelet[2773]: W0123 17:59:23.831409 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.831850 kubelet[2773]: E0123 17:59:23.831432 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.833015 kubelet[2773]: E0123 17:59:23.832988 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.833015 kubelet[2773]: W0123 17:59:23.833008 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.833159 kubelet[2773]: E0123 17:59:23.833027 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.833570 kubelet[2773]: E0123 17:59:23.833530 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.833570 kubelet[2773]: W0123 17:59:23.833549 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.833829 kubelet[2773]: E0123 17:59:23.833790 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.834256 kubelet[2773]: E0123 17:59:23.834226 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.834256 kubelet[2773]: W0123 17:59:23.834244 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.834413 kubelet[2773]: E0123 17:59:23.834261 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.834698 kubelet[2773]: E0123 17:59:23.834655 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.834698 kubelet[2773]: W0123 17:59:23.834672 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.834698 kubelet[2773]: E0123 17:59:23.834683 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.835105 kubelet[2773]: E0123 17:59:23.835076 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.835105 kubelet[2773]: W0123 17:59:23.835093 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.835188 kubelet[2773]: E0123 17:59:23.835104 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835324 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.836295 kubelet[2773]: W0123 17:59:23.835336 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835345 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835494 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.836295 kubelet[2773]: W0123 17:59:23.835501 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835513 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835651 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.836295 kubelet[2773]: W0123 17:59:23.835661 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835671 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.836295 kubelet[2773]: E0123 17:59:23.835783 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.836573 kubelet[2773]: W0123 17:59:23.835789 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.836573 kubelet[2773]: E0123 17:59:23.835796 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.836573 kubelet[2773]: E0123 17:59:23.835952 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.836573 kubelet[2773]: W0123 17:59:23.835974 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.836573 kubelet[2773]: E0123 17:59:23.836007 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.836573 kubelet[2773]: E0123 17:59:23.836361 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.836573 kubelet[2773]: W0123 17:59:23.836372 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.836573 kubelet[2773]: E0123 17:59:23.836382 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.855602 containerd[1554]: time="2026-01-23T17:59:23.855042702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fkbl8,Uid:499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:23.865014 kubelet[2773]: E0123 17:59:23.864958 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.865014 kubelet[2773]: W0123 17:59:23.865006 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.865014 kubelet[2773]: E0123 17:59:23.865027 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.865348 kubelet[2773]: I0123 17:59:23.865091 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19e7b4f8-da69-42b9-9afe-ffb180e828e6-kubelet-dir\") pod \"csi-node-driver-lc96b\" (UID: \"19e7b4f8-da69-42b9-9afe-ffb180e828e6\") " pod="calico-system/csi-node-driver-lc96b" Jan 23 17:59:23.865717 kubelet[2773]: E0123 17:59:23.865690 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.865872 kubelet[2773]: W0123 17:59:23.865719 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.865872 kubelet[2773]: E0123 17:59:23.865738 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.866303 kubelet[2773]: I0123 17:59:23.865893 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19e7b4f8-da69-42b9-9afe-ffb180e828e6-registration-dir\") pod \"csi-node-driver-lc96b\" (UID: \"19e7b4f8-da69-42b9-9afe-ffb180e828e6\") " pod="calico-system/csi-node-driver-lc96b" Jan 23 17:59:23.867151 kubelet[2773]: E0123 17:59:23.867117 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.867407 kubelet[2773]: W0123 17:59:23.867158 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.867407 kubelet[2773]: E0123 17:59:23.867177 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.867407 kubelet[2773]: E0123 17:59:23.867400 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.867407 kubelet[2773]: W0123 17:59:23.867409 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.867545 kubelet[2773]: E0123 17:59:23.867419 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.868047 kubelet[2773]: E0123 17:59:23.868023 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.868047 kubelet[2773]: W0123 17:59:23.868038 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.868047 kubelet[2773]: E0123 17:59:23.868051 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.868223 kubelet[2773]: I0123 17:59:23.868093 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n92t2\" (UniqueName: \"kubernetes.io/projected/19e7b4f8-da69-42b9-9afe-ffb180e828e6-kube-api-access-n92t2\") pod \"csi-node-driver-lc96b\" (UID: \"19e7b4f8-da69-42b9-9afe-ffb180e828e6\") " pod="calico-system/csi-node-driver-lc96b" Jan 23 17:59:23.868768 kubelet[2773]: E0123 17:59:23.868717 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.868898 kubelet[2773]: W0123 17:59:23.868734 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.868898 kubelet[2773]: E0123 17:59:23.868882 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.869193 kubelet[2773]: I0123 17:59:23.868960 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19e7b4f8-da69-42b9-9afe-ffb180e828e6-socket-dir\") pod \"csi-node-driver-lc96b\" (UID: \"19e7b4f8-da69-42b9-9afe-ffb180e828e6\") " pod="calico-system/csi-node-driver-lc96b" Jan 23 17:59:23.869683 kubelet[2773]: E0123 17:59:23.869645 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.869683 kubelet[2773]: W0123 17:59:23.869667 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.869683 kubelet[2773]: E0123 17:59:23.869681 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.871758 kubelet[2773]: E0123 17:59:23.871716 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.871758 kubelet[2773]: W0123 17:59:23.871741 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.872042 kubelet[2773]: E0123 17:59:23.871826 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:23.872800 kubelet[2773]: E0123 17:59:23.872761 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.872800 kubelet[2773]: W0123 17:59:23.872783 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.872800 kubelet[2773]: E0123 17:59:23.872804 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:23.872800 kubelet[2773]: I0123 17:59:23.872945 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/19e7b4f8-da69-42b9-9afe-ffb180e828e6-varrun\") pod \"csi-node-driver-lc96b\" (UID: \"19e7b4f8-da69-42b9-9afe-ffb180e828e6\") " pod="calico-system/csi-node-driver-lc96b" Jan 23 17:59:23.874784 kubelet[2773]: E0123 17:59:23.873607 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:23.874784 kubelet[2773]: W0123 17:59:23.873621 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:23.874784 kubelet[2773]: E0123 17:59:23.874210 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 17:59:23.874784 kubelet[2773]: E0123 17:59:23.874444 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:23.874784 kubelet[2773]: W0123 17:59:23.874454 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:23.874784 kubelet[2773]: E0123 17:59:23.874465 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:23.887917 containerd[1554]: time="2026-01-23T17:59:23.887546141Z" level=info msg="connecting to shim bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5" address="unix:///run/containerd/s/3d6f01a6a1486763c90c9f7380416f6b51cc99e828c717e2afbfbe5a4e0d7fbb" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:59:23.923510 systemd[1]: Started cri-containerd-bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5.scope - libcontainer container bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5.
Jan 23 17:59:23.955849 containerd[1554]: time="2026-01-23T17:59:23.955724737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fkbl8,Uid:499ec2ea-f8bf-4e03-90f3-ff6c6b2416e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\""
Jan 23 17:59:25.311965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2942015113.mount: Deactivated successfully.
Jan 23 17:59:25.370293 kubelet[2773]: E0123 17:59:25.370107 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 17:59:25.810574 containerd[1554]: time="2026-01-23T17:59:25.810509194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:25.812015 containerd[1554]: time="2026-01-23T17:59:25.811970651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 17:59:25.814689 containerd[1554]: time="2026-01-23T17:59:25.814606971Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:25.819620 containerd[1554]: time="2026-01-23T17:59:25.819136299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:25.820739 containerd[1554]: time="2026-01-23T17:59:25.820677536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.997960264s"
Jan 23 17:59:25.820739 containerd[1554]: time="2026-01-23T17:59:25.820716866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 17:59:25.825043 containerd[1554]: time="2026-01-23T17:59:25.824993169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 17:59:25.852975 containerd[1554]: time="2026-01-23T17:59:25.852932612Z" level=info msg="CreateContainer within sandbox \"55879c82cb5f6396062cbfc883ea3c76b3c9aef2adf4373374b94afac56cd1a5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 17:59:25.871963 containerd[1554]: time="2026-01-23T17:59:25.871073010Z" level=info msg="Container 84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:25.876439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906661304.mount: Deactivated successfully.
Jan 23 17:59:25.889406 containerd[1554]: time="2026-01-23T17:59:25.889355884Z" level=info msg="CreateContainer within sandbox \"55879c82cb5f6396062cbfc883ea3c76b3c9aef2adf4373374b94afac56cd1a5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e\""
Jan 23 17:59:25.890664 containerd[1554]: time="2026-01-23T17:59:25.890623490Z" level=info msg="StartContainer for \"84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e\""
Jan 23 17:59:25.893546 containerd[1554]: time="2026-01-23T17:59:25.893450419Z" level=info msg="connecting to shim 84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e" address="unix:///run/containerd/s/1f268191c29b7fef34dfb0199f7152e6eb55e3a26ba80a6fcfea4473eb38b86f" protocol=ttrpc version=3
Jan 23 17:59:25.926612 systemd[1]: Started cri-containerd-84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e.scope - libcontainer container 84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e.
Jan 23 17:59:25.982141 containerd[1554]: time="2026-01-23T17:59:25.982077230Z" level=info msg="StartContainer for \"84712aa54ab79fa346aef92632a36a0b5bc3567ef1566721ffb7a4ff2987d70e\" returns successfully"
Jan 23 17:59:26.522507 kubelet[2773]: I0123 17:59:26.522337 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5864767665-fx79m" podStartSLOduration=1.519610015 podStartE2EDuration="3.522317736s" podCreationTimestamp="2026-01-23 17:59:23 +0000 UTC" firstStartedPulling="2026-01-23 17:59:23.821002077 +0000 UTC m=+26.573617531" lastFinishedPulling="2026-01-23 17:59:25.823709798 +0000 UTC m=+28.576325252" observedRunningTime="2026-01-23 17:59:26.521664893 +0000 UTC m=+29.274280347" watchObservedRunningTime="2026-01-23 17:59:26.522317736 +0000 UTC m=+29.274933230"
Jan 23 17:59:26.555879 kubelet[2773]: E0123 17:59:26.555605 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:26.555879 kubelet[2773]: W0123 17:59:26.555673 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:26.555879 kubelet[2773]: E0123 17:59:26.555709 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:26.604190 kubelet[2773]: E0123 17:59:26.604034 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:26.604190 kubelet[2773]: W0123 17:59:26.604064 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:26.604190 kubelet[2773]: E0123 17:59:26.604084 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 17:59:26.604407 kubelet[2773]: E0123 17:59:26.604371 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.604407 kubelet[2773]: W0123 17:59:26.604385 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.604407 kubelet[2773]: E0123 17:59:26.604401 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.604737 kubelet[2773]: E0123 17:59:26.604716 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.604737 kubelet[2773]: W0123 17:59:26.604736 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.604860 kubelet[2773]: E0123 17:59:26.604753 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.605023 kubelet[2773]: E0123 17:59:26.604994 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.605023 kubelet[2773]: W0123 17:59:26.605005 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.605023 kubelet[2773]: E0123 17:59:26.605016 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.605221 kubelet[2773]: E0123 17:59:26.605207 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.605221 kubelet[2773]: W0123 17:59:26.605220 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.605359 kubelet[2773]: E0123 17:59:26.605234 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.605508 kubelet[2773]: E0123 17:59:26.605478 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.605508 kubelet[2773]: W0123 17:59:26.605499 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.605640 kubelet[2773]: E0123 17:59:26.605516 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.605814 kubelet[2773]: E0123 17:59:26.605789 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.605814 kubelet[2773]: W0123 17:59:26.605809 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.605904 kubelet[2773]: E0123 17:59:26.605824 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.605991 kubelet[2773]: E0123 17:59:26.605976 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.605991 kubelet[2773]: W0123 17:59:26.605989 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.606076 kubelet[2773]: E0123 17:59:26.606000 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.606399 kubelet[2773]: E0123 17:59:26.606374 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.606399 kubelet[2773]: W0123 17:59:26.606395 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.606510 kubelet[2773]: E0123 17:59:26.606406 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.606621 kubelet[2773]: E0123 17:59:26.606596 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.606621 kubelet[2773]: W0123 17:59:26.606611 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.606771 kubelet[2773]: E0123 17:59:26.606632 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.607283 kubelet[2773]: E0123 17:59:26.607234 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.607283 kubelet[2773]: W0123 17:59:26.607252 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.607411 kubelet[2773]: E0123 17:59:26.607278 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.607500 kubelet[2773]: E0123 17:59:26.607472 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.607500 kubelet[2773]: W0123 17:59:26.607488 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.607500 kubelet[2773]: E0123 17:59:26.607498 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.607814 kubelet[2773]: E0123 17:59:26.607692 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.607814 kubelet[2773]: W0123 17:59:26.607702 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.607814 kubelet[2773]: E0123 17:59:26.607716 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.607942 kubelet[2773]: E0123 17:59:26.607906 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.607942 kubelet[2773]: W0123 17:59:26.607916 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.607942 kubelet[2773]: E0123 17:59:26.607926 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.608215 kubelet[2773]: E0123 17:59:26.608191 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.608215 kubelet[2773]: W0123 17:59:26.608210 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.608330 kubelet[2773]: E0123 17:59:26.608223 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:26.608480 kubelet[2773]: E0123 17:59:26.608461 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.608480 kubelet[2773]: W0123 17:59:26.608476 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.608576 kubelet[2773]: E0123 17:59:26.608488 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:26.608999 kubelet[2773]: E0123 17:59:26.608973 2773 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:26.608999 kubelet[2773]: W0123 17:59:26.608994 2773 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:26.609109 kubelet[2773]: E0123 17:59:26.609009 2773 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:27.244343 containerd[1554]: time="2026-01-23T17:59:27.243958086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:27.245846 containerd[1554]: time="2026-01-23T17:59:27.245670379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 17:59:27.245846 containerd[1554]: time="2026-01-23T17:59:27.245758160Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:27.250227 containerd[1554]: time="2026-01-23T17:59:27.249586603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:27.250768 containerd[1554]: time="2026-01-23T17:59:27.250717716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.425674894s" Jan 23 17:59:27.251021 containerd[1554]: time="2026-01-23T17:59:27.250986861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 17:59:27.257590 containerd[1554]: time="2026-01-23T17:59:27.257460902Z" level=info msg="CreateContainer within sandbox \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 17:59:27.272192 containerd[1554]: time="2026-01-23T17:59:27.270447953Z" level=info msg="Container 9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:27.287997 containerd[1554]: time="2026-01-23T17:59:27.287944772Z" level=info msg="CreateContainer within sandbox \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854\"" Jan 23 17:59:27.290025 containerd[1554]: time="2026-01-23T17:59:27.289980982Z" level=info msg="StartContainer for \"9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854\"" Jan 23 17:59:27.293057 containerd[1554]: time="2026-01-23T17:59:27.293006672Z" level=info msg="connecting to shim 9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854" address="unix:///run/containerd/s/3d6f01a6a1486763c90c9f7380416f6b51cc99e828c717e2afbfbe5a4e0d7fbb" protocol=ttrpc version=3 Jan 23 17:59:27.323530 systemd[1]: Started cri-containerd-9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854.scope - libcontainer container 9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854. 
Jan 23 17:59:27.372601 kubelet[2773]: E0123 17:59:27.372536 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 17:59:27.417722 containerd[1554]: time="2026-01-23T17:59:27.417611755Z" level=info msg="StartContainer for \"9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854\" returns successfully"
Jan 23 17:59:27.439584 systemd[1]: cri-containerd-9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854.scope: Deactivated successfully.
Jan 23 17:59:27.445329 containerd[1554]: time="2026-01-23T17:59:27.445164999Z" level=info msg="received container exit event container_id:\"9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854\" id:\"9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854\" pid:3487 exited_at:{seconds:1769191167 nanos:444777785}"
Jan 23 17:59:27.474976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da72879cb2bb599d39f4358b1be62b7a936c97f173f9e59f57f0275ddfde854-rootfs.mount: Deactivated successfully.
Jan 23 17:59:27.515785 kubelet[2773]: I0123 17:59:27.515745 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 17:59:28.524908 containerd[1554]: time="2026-01-23T17:59:28.524855646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 17:59:29.371521 kubelet[2773]: E0123 17:59:29.370570 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 17:59:31.180298 containerd[1554]: time="2026-01-23T17:59:31.180164018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:31.183233 containerd[1554]: time="2026-01-23T17:59:31.183184143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 23 17:59:31.185302 containerd[1554]: time="2026-01-23T17:59:31.185234580Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:31.189015 containerd[1554]: time="2026-01-23T17:59:31.188953854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:31.190822 containerd[1554]: time="2026-01-23T17:59:31.190762040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.665846701s"
Jan 23 17:59:31.190967 containerd[1554]: time="2026-01-23T17:59:31.190831055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 23 17:59:31.199602 containerd[1554]: time="2026-01-23T17:59:31.199522431Z" level=info msg="CreateContainer within sandbox \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 17:59:31.219813 containerd[1554]: time="2026-01-23T17:59:31.218522847Z" level=info msg="Container 7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:31.231318 containerd[1554]: time="2026-01-23T17:59:31.231223199Z" level=info msg="CreateContainer within sandbox \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004\""
Jan 23 17:59:31.232403 containerd[1554]: time="2026-01-23T17:59:31.232346558Z" level=info msg="StartContainer for \"7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004\""
Jan 23 17:59:31.234461 containerd[1554]: time="2026-01-23T17:59:31.234412599Z" level=info msg="connecting to shim 7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004" address="unix:///run/containerd/s/3d6f01a6a1486763c90c9f7380416f6b51cc99e828c717e2afbfbe5a4e0d7fbb" protocol=ttrpc version=3
Jan 23 17:59:31.268494 systemd[1]: Started cri-containerd-7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004.scope - libcontainer container 7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004.
Jan 23 17:59:31.351633 containerd[1554]: time="2026-01-23T17:59:31.351521401Z" level=info msg="StartContainer for \"7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004\" returns successfully"
Jan 23 17:59:31.371836 kubelet[2773]: E0123 17:59:31.370607 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 17:59:31.922187 containerd[1554]: time="2026-01-23T17:59:31.922085772Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 17:59:31.927030 systemd[1]: cri-containerd-7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004.scope: Deactivated successfully.
Jan 23 17:59:31.928789 systemd[1]: cri-containerd-7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004.scope: Consumed 536ms CPU time, 188.4M memory peak, 165.9M written to disk.
Jan 23 17:59:31.932207 containerd[1554]: time="2026-01-23T17:59:31.932085387Z" level=info msg="received container exit event container_id:\"7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004\" id:\"7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004\" pid:3543 exited_at:{seconds:1769191171 nanos:931669738}"
Jan 23 17:59:31.938468 kubelet[2773]: I0123 17:59:31.937449 2773 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 17:59:31.971777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cf5a5f0e17e1a74e8d84feda5fb97ff6fb678ada42cc868ed78168b48686004-rootfs.mount: Deactivated successfully.
Jan 23 17:59:32.048049 systemd[1]: Created slice kubepods-burstable-pod1177e5f5_20db_4e39_bf2b_92e330c49d30.slice - libcontainer container kubepods-burstable-pod1177e5f5_20db_4e39_bf2b_92e330c49d30.slice.
Jan 23 17:59:32.091208 systemd[1]: Created slice kubepods-burstable-pod15c8748c_1044_422b_bd07_3cf9093f7667.slice - libcontainer container kubepods-burstable-pod15c8748c_1044_422b_bd07_3cf9093f7667.slice.
Jan 23 17:59:32.112414 systemd[1]: Created slice kubepods-besteffort-pod974cfe4b_f68c_4b1b_b7ed_efcbbe2e4c59.slice - libcontainer container kubepods-besteffort-pod974cfe4b_f68c_4b1b_b7ed_efcbbe2e4c59.slice.
Jan 23 17:59:32.126401 systemd[1]: Created slice kubepods-besteffort-pod01f4753b_deaf_4717_98ef_d3ec1b50a10a.slice - libcontainer container kubepods-besteffort-pod01f4753b_deaf_4717_98ef_d3ec1b50a10a.slice.
Jan 23 17:59:32.135549 systemd[1]: Created slice kubepods-besteffort-pod34ce0042_b647_451e_af06_1121a3b681b0.slice - libcontainer container kubepods-besteffort-pod34ce0042_b647_451e_af06_1121a3b681b0.slice.
Jan 23 17:59:32.144809 systemd[1]: Created slice kubepods-besteffort-pod0960425a_3ec9_4577_9a39_48675b2f3498.slice - libcontainer container kubepods-besteffort-pod0960425a_3ec9_4577_9a39_48675b2f3498.slice.
Jan 23 17:59:32.145850 kubelet[2773]: I0123 17:59:32.145740 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c22jf\" (UniqueName: \"kubernetes.io/projected/15c8748c-1044-422b-bd07-3cf9093f7667-kube-api-access-c22jf\") pod \"coredns-66bc5c9577-sw8bc\" (UID: \"15c8748c-1044-422b-bd07-3cf9093f7667\") " pod="kube-system/coredns-66bc5c9577-sw8bc"
Jan 23 17:59:32.145850 kubelet[2773]: I0123 17:59:32.145786 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12cc26cd-794b-4ec3-9b6f-99c6d249650c-calico-apiserver-certs\") pod \"calico-apiserver-6f4f6d594f-7pkzs\" (UID: \"12cc26cd-794b-4ec3-9b6f-99c6d249650c\") " pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs"
Jan 23 17:59:32.145850 kubelet[2773]: I0123 17:59:32.145805 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59-calico-apiserver-certs\") pod \"calico-apiserver-6f4f6d594f-f5jb7\" (UID: \"974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59\") " pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7"
Jan 23 17:59:32.145850 kubelet[2773]: I0123 17:59:32.145825 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0960425a-3ec9-4577-9a39-48675b2f3498-goldmane-key-pair\") pod \"goldmane-7c778bb748-tqdk2\" (UID: \"0960425a-3ec9-4577-9a39-48675b2f3498\") " pod="calico-system/goldmane-7c778bb748-tqdk2"
Jan 23 17:59:32.145850 kubelet[2773]: I0123 17:59:32.145840 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4pv4\" (UniqueName: \"kubernetes.io/projected/974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59-kube-api-access-c4pv4\") pod \"calico-apiserver-6f4f6d594f-f5jb7\" (UID: \"974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59\") " pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7"
Jan 23 17:59:32.146136 kubelet[2773]: I0123 17:59:32.145860 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0960425a-3ec9-4577-9a39-48675b2f3498-config\") pod \"goldmane-7c778bb748-tqdk2\" (UID: \"0960425a-3ec9-4577-9a39-48675b2f3498\") " pod="calico-system/goldmane-7c778bb748-tqdk2"
Jan 23 17:59:32.146136 kubelet[2773]: I0123 17:59:32.145877 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1177e5f5-20db-4e39-bf2b-92e330c49d30-config-volume\") pod \"coredns-66bc5c9577-5cgdl\" (UID: \"1177e5f5-20db-4e39-bf2b-92e330c49d30\") " pod="kube-system/coredns-66bc5c9577-5cgdl"
Jan 23 17:59:32.146136 kubelet[2773]: I0123 17:59:32.145891 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ce0042-b647-451e-af06-1121a3b681b0-whisker-ca-bundle\") pod \"whisker-bd6b74446-7sm6r\" (UID: \"34ce0042-b647-451e-af06-1121a3b681b0\") " pod="calico-system/whisker-bd6b74446-7sm6r"
Jan 23 17:59:32.146136 kubelet[2773]: I0123 17:59:32.145967 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgv2b\" (UniqueName: \"kubernetes.io/projected/01f4753b-deaf-4717-98ef-d3ec1b50a10a-kube-api-access-wgv2b\") pod \"calico-kube-controllers-6b9444d76-tdrjj\" (UID: \"01f4753b-deaf-4717-98ef-d3ec1b50a10a\") " pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj"
Jan 23 17:59:32.146136 kubelet[2773]: I0123 17:59:32.145989 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0960425a-3ec9-4577-9a39-48675b2f3498-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-tqdk2\" (UID: \"0960425a-3ec9-4577-9a39-48675b2f3498\") " pod="calico-system/goldmane-7c778bb748-tqdk2"
Jan 23 17:59:32.146249 kubelet[2773]: I0123 17:59:32.146010 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ckb6\" (UniqueName: \"kubernetes.io/projected/12cc26cd-794b-4ec3-9b6f-99c6d249650c-kube-api-access-5ckb6\") pod \"calico-apiserver-6f4f6d594f-7pkzs\" (UID: \"12cc26cd-794b-4ec3-9b6f-99c6d249650c\") " pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs"
Jan 23 17:59:32.146249 kubelet[2773]: I0123 17:59:32.146025 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34ce0042-b647-451e-af06-1121a3b681b0-whisker-backend-key-pair\") pod \"whisker-bd6b74446-7sm6r\" (UID: \"34ce0042-b647-451e-af06-1121a3b681b0\") " pod="calico-system/whisker-bd6b74446-7sm6r"
Jan 23 17:59:32.146249 kubelet[2773]: I0123 17:59:32.146048 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4624\" (UniqueName: \"kubernetes.io/projected/0960425a-3ec9-4577-9a39-48675b2f3498-kube-api-access-w4624\") pod \"goldmane-7c778bb748-tqdk2\" (UID: \"0960425a-3ec9-4577-9a39-48675b2f3498\") " pod="calico-system/goldmane-7c778bb748-tqdk2"
Jan 23 17:59:32.146249 kubelet[2773]: I0123 17:59:32.146063 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcqbr\" (UniqueName: \"kubernetes.io/projected/1177e5f5-20db-4e39-bf2b-92e330c49d30-kube-api-access-qcqbr\") pod \"coredns-66bc5c9577-5cgdl\" (UID: \"1177e5f5-20db-4e39-bf2b-92e330c49d30\") " pod="kube-system/coredns-66bc5c9577-5cgdl"
Jan 23 17:59:32.146249 kubelet[2773]: I0123 17:59:32.146077 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k2m\" (UniqueName: \"kubernetes.io/projected/34ce0042-b647-451e-af06-1121a3b681b0-kube-api-access-n8k2m\") pod \"whisker-bd6b74446-7sm6r\" (UID: \"34ce0042-b647-451e-af06-1121a3b681b0\") " pod="calico-system/whisker-bd6b74446-7sm6r"
Jan 23 17:59:32.146891 kubelet[2773]: I0123 17:59:32.146101 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01f4753b-deaf-4717-98ef-d3ec1b50a10a-tigera-ca-bundle\") pod \"calico-kube-controllers-6b9444d76-tdrjj\" (UID: \"01f4753b-deaf-4717-98ef-d3ec1b50a10a\") " pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj"
Jan 23 17:59:32.146891 kubelet[2773]: I0123 17:59:32.146124 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c8748c-1044-422b-bd07-3cf9093f7667-config-volume\") pod \"coredns-66bc5c9577-sw8bc\" (UID: \"15c8748c-1044-422b-bd07-3cf9093f7667\") " pod="kube-system/coredns-66bc5c9577-sw8bc"
Jan 23 17:59:32.156090 systemd[1]: Created slice kubepods-besteffort-pod12cc26cd_794b_4ec3_9b6f_99c6d249650c.slice - libcontainer container kubepods-besteffort-pod12cc26cd_794b_4ec3_9b6f_99c6d249650c.slice.
Jan 23 17:59:32.373283 containerd[1554]: time="2026-01-23T17:59:32.373222226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5cgdl,Uid:1177e5f5-20db-4e39-bf2b-92e330c49d30,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:32.409170 containerd[1554]: time="2026-01-23T17:59:32.409121840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw8bc,Uid:15c8748c-1044-422b-bd07-3cf9093f7667,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:32.422502 containerd[1554]: time="2026-01-23T17:59:32.422429003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-f5jb7,Uid:974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:32.448248 containerd[1554]: time="2026-01-23T17:59:32.448209955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9444d76-tdrjj,Uid:01f4753b-deaf-4717-98ef-d3ec1b50a10a,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:32.449577 containerd[1554]: time="2026-01-23T17:59:32.449549274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd6b74446-7sm6r,Uid:34ce0042-b647-451e-af06-1121a3b681b0,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:32.453746 containerd[1554]: time="2026-01-23T17:59:32.453712138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tqdk2,Uid:0960425a-3ec9-4577-9a39-48675b2f3498,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:32.455875 kubelet[2773]: I0123 17:59:32.455839 2773 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:59:32.467580 containerd[1554]: time="2026-01-23T17:59:32.467087635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-7pkzs,Uid:12cc26cd-794b-4ec3-9b6f-99c6d249650c,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:32.565753 containerd[1554]: time="2026-01-23T17:59:32.565708872Z" level=error msg="Failed to destroy network for sandbox 
\"aacaeb4f25f39341c1bc653b04989dacd38c62186ddacea8d9c456e708081977\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.569765 containerd[1554]: time="2026-01-23T17:59:32.569417602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5cgdl,Uid:1177e5f5-20db-4e39-bf2b-92e330c49d30,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacaeb4f25f39341c1bc653b04989dacd38c62186ddacea8d9c456e708081977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.571099 kubelet[2773]: E0123 17:59:32.570797 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacaeb4f25f39341c1bc653b04989dacd38c62186ddacea8d9c456e708081977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.571099 kubelet[2773]: E0123 17:59:32.570890 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aacaeb4f25f39341c1bc653b04989dacd38c62186ddacea8d9c456e708081977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5cgdl" Jan 23 17:59:32.571099 kubelet[2773]: E0123 17:59:32.570911 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aacaeb4f25f39341c1bc653b04989dacd38c62186ddacea8d9c456e708081977\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5cgdl" Jan 23 17:59:32.571462 kubelet[2773]: E0123 17:59:32.570978 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5cgdl_kube-system(1177e5f5-20db-4e39-bf2b-92e330c49d30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5cgdl_kube-system(1177e5f5-20db-4e39-bf2b-92e330c49d30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aacaeb4f25f39341c1bc653b04989dacd38c62186ddacea8d9c456e708081977\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5cgdl" podUID="1177e5f5-20db-4e39-bf2b-92e330c49d30" Jan 23 17:59:32.588370 containerd[1554]: time="2026-01-23T17:59:32.586282304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:59:32.634494 containerd[1554]: time="2026-01-23T17:59:32.634345843Z" level=error msg="Failed to destroy network for sandbox \"039174e5e76e589f46f1eebfea6759d614fe91c89f24f0b5ec0100c747483bf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.640050 containerd[1554]: time="2026-01-23T17:59:32.638233410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw8bc,Uid:15c8748c-1044-422b-bd07-3cf9093f7667,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"039174e5e76e589f46f1eebfea6759d614fe91c89f24f0b5ec0100c747483bf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.641071 kubelet[2773]: E0123 17:59:32.640360 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039174e5e76e589f46f1eebfea6759d614fe91c89f24f0b5ec0100c747483bf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.641071 kubelet[2773]: E0123 17:59:32.640414 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039174e5e76e589f46f1eebfea6759d614fe91c89f24f0b5ec0100c747483bf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sw8bc" Jan 23 17:59:32.641071 kubelet[2773]: E0123 17:59:32.640435 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"039174e5e76e589f46f1eebfea6759d614fe91c89f24f0b5ec0100c747483bf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sw8bc" Jan 23 17:59:32.641243 kubelet[2773]: E0123 17:59:32.640488 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sw8bc_kube-system(15c8748c-1044-422b-bd07-3cf9093f7667)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-sw8bc_kube-system(15c8748c-1044-422b-bd07-3cf9093f7667)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"039174e5e76e589f46f1eebfea6759d614fe91c89f24f0b5ec0100c747483bf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sw8bc" podUID="15c8748c-1044-422b-bd07-3cf9093f7667" Jan 23 17:59:32.654392 containerd[1554]: time="2026-01-23T17:59:32.654336074Z" level=error msg="Failed to destroy network for sandbox \"da4c3fd69288acf17305b2c18f3ab396fb25949d2c1f8cc45ebff7087b6e40d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.656637 containerd[1554]: time="2026-01-23T17:59:32.656589461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-7pkzs,Uid:12cc26cd-794b-4ec3-9b6f-99c6d249650c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4c3fd69288acf17305b2c18f3ab396fb25949d2c1f8cc45ebff7087b6e40d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.657174 kubelet[2773]: E0123 17:59:32.657065 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4c3fd69288acf17305b2c18f3ab396fb25949d2c1f8cc45ebff7087b6e40d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.657531 kubelet[2773]: E0123 17:59:32.657470 2773 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4c3fd69288acf17305b2c18f3ab396fb25949d2c1f8cc45ebff7087b6e40d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" Jan 23 17:59:32.657531 kubelet[2773]: E0123 17:59:32.657498 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da4c3fd69288acf17305b2c18f3ab396fb25949d2c1f8cc45ebff7087b6e40d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" Jan 23 17:59:32.657748 kubelet[2773]: E0123 17:59:32.657691 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f4f6d594f-7pkzs_calico-apiserver(12cc26cd-794b-4ec3-9b6f-99c6d249650c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f4f6d594f-7pkzs_calico-apiserver(12cc26cd-794b-4ec3-9b6f-99c6d249650c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da4c3fd69288acf17305b2c18f3ab396fb25949d2c1f8cc45ebff7087b6e40d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 17:59:32.682287 containerd[1554]: time="2026-01-23T17:59:32.682200539Z" level=error msg="Failed to destroy network for sandbox \"84794c63f404abb282363afec2a2ab9490e5edd3e2ade058e6bfdd576266b783\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.684917 containerd[1554]: time="2026-01-23T17:59:32.684859291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-f5jb7,Uid:974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84794c63f404abb282363afec2a2ab9490e5edd3e2ade058e6bfdd576266b783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.685842 kubelet[2773]: E0123 17:59:32.685135 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84794c63f404abb282363afec2a2ab9490e5edd3e2ade058e6bfdd576266b783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.685842 kubelet[2773]: E0123 17:59:32.685198 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84794c63f404abb282363afec2a2ab9490e5edd3e2ade058e6bfdd576266b783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" Jan 23 17:59:32.685842 kubelet[2773]: E0123 17:59:32.685220 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84794c63f404abb282363afec2a2ab9490e5edd3e2ade058e6bfdd576266b783\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" Jan 23 17:59:32.687102 kubelet[2773]: E0123 17:59:32.686230 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84794c63f404abb282363afec2a2ab9490e5edd3e2ade058e6bfdd576266b783\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 17:59:32.692997 containerd[1554]: time="2026-01-23T17:59:32.692738607Z" level=error msg="Failed to destroy network for sandbox \"84dbae35fc4424d95a84d2106a07f7ca9294631cdca3f92fff19e9de64258067\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.696078 containerd[1554]: time="2026-01-23T17:59:32.695561553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9444d76-tdrjj,Uid:01f4753b-deaf-4717-98ef-d3ec1b50a10a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbae35fc4424d95a84d2106a07f7ca9294631cdca3f92fff19e9de64258067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.696312 kubelet[2773]: E0123 17:59:32.696209 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbae35fc4424d95a84d2106a07f7ca9294631cdca3f92fff19e9de64258067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.696312 kubelet[2773]: E0123 17:59:32.696305 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbae35fc4424d95a84d2106a07f7ca9294631cdca3f92fff19e9de64258067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" Jan 23 17:59:32.696389 kubelet[2773]: E0123 17:59:32.696331 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84dbae35fc4424d95a84d2106a07f7ca9294631cdca3f92fff19e9de64258067\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" Jan 23 17:59:32.696422 kubelet[2773]: E0123 17:59:32.696378 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"84dbae35fc4424d95a84d2106a07f7ca9294631cdca3f92fff19e9de64258067\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 17:59:32.708529 containerd[1554]: time="2026-01-23T17:59:32.708400499Z" level=error msg="Failed to destroy network for sandbox \"b1f55384d036fb11ba39fc505e35f304e1571cbbc01d0d824ea8ce3faea91eaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.710563 containerd[1554]: time="2026-01-23T17:59:32.710495894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd6b74446-7sm6r,Uid:34ce0042-b647-451e-af06-1121a3b681b0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f55384d036fb11ba39fc505e35f304e1571cbbc01d0d824ea8ce3faea91eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.711090 kubelet[2773]: E0123 17:59:32.710985 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f55384d036fb11ba39fc505e35f304e1571cbbc01d0d824ea8ce3faea91eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.711090 kubelet[2773]: E0123 17:59:32.711075 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b1f55384d036fb11ba39fc505e35f304e1571cbbc01d0d824ea8ce3faea91eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd6b74446-7sm6r" Jan 23 17:59:32.711507 kubelet[2773]: E0123 17:59:32.711098 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1f55384d036fb11ba39fc505e35f304e1571cbbc01d0d824ea8ce3faea91eaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd6b74446-7sm6r" Jan 23 17:59:32.711507 kubelet[2773]: E0123 17:59:32.711156 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd6b74446-7sm6r_calico-system(34ce0042-b647-451e-af06-1121a3b681b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd6b74446-7sm6r_calico-system(34ce0042-b647-451e-af06-1121a3b681b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1f55384d036fb11ba39fc505e35f304e1571cbbc01d0d824ea8ce3faea91eaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd6b74446-7sm6r" podUID="34ce0042-b647-451e-af06-1121a3b681b0" Jan 23 17:59:32.713723 containerd[1554]: time="2026-01-23T17:59:32.713655830Z" level=error msg="Failed to destroy network for sandbox \"f6b6fec70413b9abc5a457869235e65aad19abc39d93106397c5eab301d702d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.715723 
containerd[1554]: time="2026-01-23T17:59:32.715661287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tqdk2,Uid:0960425a-3ec9-4577-9a39-48675b2f3498,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b6fec70413b9abc5a457869235e65aad19abc39d93106397c5eab301d702d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.716145 kubelet[2773]: E0123 17:59:32.715954 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b6fec70413b9abc5a457869235e65aad19abc39d93106397c5eab301d702d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:32.716145 kubelet[2773]: E0123 17:59:32.716058 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b6fec70413b9abc5a457869235e65aad19abc39d93106397c5eab301d702d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tqdk2" Jan 23 17:59:32.716145 kubelet[2773]: E0123 17:59:32.716078 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b6fec70413b9abc5a457869235e65aad19abc39d93106397c5eab301d702d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tqdk2" Jan 23 
17:59:32.716473 kubelet[2773]: E0123 17:59:32.716139 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tqdk2_calico-system(0960425a-3ec9-4577-9a39-48675b2f3498)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tqdk2_calico-system(0960425a-3ec9-4577-9a39-48675b2f3498)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6b6fec70413b9abc5a457869235e65aad19abc39d93106397c5eab301d702d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 17:59:33.379873 systemd[1]: Created slice kubepods-besteffort-pod19e7b4f8_da69_42b9_9afe_ffb180e828e6.slice - libcontainer container kubepods-besteffort-pod19e7b4f8_da69_42b9_9afe_ffb180e828e6.slice. Jan 23 17:59:33.385295 containerd[1554]: time="2026-01-23T17:59:33.385065228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc96b,Uid:19e7b4f8-da69-42b9-9afe-ffb180e828e6,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:33.451113 containerd[1554]: time="2026-01-23T17:59:33.451049165Z" level=error msg="Failed to destroy network for sandbox \"662cf8d1d0ce948469d75853c14ad5b012807d91fa34f6e3fab247cbc34b20cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:33.453466 systemd[1]: run-netns-cni\x2debf63389\x2d656c\x2ded99\x2d2936\x2d2a530dfd4dca.mount: Deactivated successfully. 
Jan 23 17:59:33.457447 containerd[1554]: time="2026-01-23T17:59:33.457225414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc96b,Uid:19e7b4f8-da69-42b9-9afe-ffb180e828e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"662cf8d1d0ce948469d75853c14ad5b012807d91fa34f6e3fab247cbc34b20cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:33.457703 kubelet[2773]: E0123 17:59:33.457574 2773 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"662cf8d1d0ce948469d75853c14ad5b012807d91fa34f6e3fab247cbc34b20cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:33.457703 kubelet[2773]: E0123 17:59:33.457630 2773 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"662cf8d1d0ce948469d75853c14ad5b012807d91fa34f6e3fab247cbc34b20cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lc96b" Jan 23 17:59:33.457703 kubelet[2773]: E0123 17:59:33.457651 2773 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"662cf8d1d0ce948469d75853c14ad5b012807d91fa34f6e3fab247cbc34b20cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lc96b" 
Jan 23 17:59:33.458502 kubelet[2773]: E0123 17:59:33.457705 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"662cf8d1d0ce948469d75853c14ad5b012807d91fa34f6e3fab247cbc34b20cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 17:59:37.032581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118798112.mount: Deactivated successfully. Jan 23 17:59:37.053948 containerd[1554]: time="2026-01-23T17:59:37.053300087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:37.054448 containerd[1554]: time="2026-01-23T17:59:37.054419212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 17:59:37.055386 containerd[1554]: time="2026-01-23T17:59:37.055353823Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:37.057610 containerd[1554]: time="2026-01-23T17:59:37.057571270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:37.058058 containerd[1554]: time="2026-01-23T17:59:37.058022152Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.47169384s" Jan 23 17:59:37.058058 containerd[1554]: time="2026-01-23T17:59:37.058057079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:59:37.083619 containerd[1554]: time="2026-01-23T17:59:37.083557915Z" level=info msg="CreateContainer within sandbox \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:59:37.097976 containerd[1554]: time="2026-01-23T17:59:37.095552075Z" level=info msg="Container ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:37.100729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379491587.mount: Deactivated successfully. 
Jan 23 17:59:37.123028 containerd[1554]: time="2026-01-23T17:59:37.122962662Z" level=info msg="CreateContainer within sandbox \"bd9a41320ac09b9c81b64b4f17b15bed9d488289949ebac7d1bae66b3ff79ce5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592\"" Jan 23 17:59:37.125253 containerd[1554]: time="2026-01-23T17:59:37.123728882Z" level=info msg="StartContainer for \"ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592\"" Jan 23 17:59:37.126726 containerd[1554]: time="2026-01-23T17:59:37.126683504Z" level=info msg="connecting to shim ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592" address="unix:///run/containerd/s/3d6f01a6a1486763c90c9f7380416f6b51cc99e828c717e2afbfbe5a4e0d7fbb" protocol=ttrpc version=3 Jan 23 17:59:37.148495 systemd[1]: Started cri-containerd-ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592.scope - libcontainer container ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592. Jan 23 17:59:37.253446 containerd[1554]: time="2026-01-23T17:59:37.253403022Z" level=info msg="StartContainer for \"ac59f0270431b8b31719a4cb50ef985c00280361c79977f4d071592551734592\" returns successfully" Jan 23 17:59:37.417637 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 17:59:37.417755 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Jan 23 17:59:37.704152 kubelet[2773]: I0123 17:59:37.702261 2773 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8k2m\" (UniqueName: \"kubernetes.io/projected/34ce0042-b647-451e-af06-1121a3b681b0-kube-api-access-n8k2m\") pod \"34ce0042-b647-451e-af06-1121a3b681b0\" (UID: \"34ce0042-b647-451e-af06-1121a3b681b0\") " Jan 23 17:59:37.704152 kubelet[2773]: I0123 17:59:37.703147 2773 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34ce0042-b647-451e-af06-1121a3b681b0-whisker-backend-key-pair\") pod \"34ce0042-b647-451e-af06-1121a3b681b0\" (UID: \"34ce0042-b647-451e-af06-1121a3b681b0\") " Jan 23 17:59:37.704152 kubelet[2773]: I0123 17:59:37.703171 2773 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ce0042-b647-451e-af06-1121a3b681b0-whisker-ca-bundle\") pod \"34ce0042-b647-451e-af06-1121a3b681b0\" (UID: \"34ce0042-b647-451e-af06-1121a3b681b0\") " Jan 23 17:59:37.712859 kubelet[2773]: I0123 17:59:37.712815 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ce0042-b647-451e-af06-1121a3b681b0-kube-api-access-n8k2m" (OuterVolumeSpecName: "kube-api-access-n8k2m") pod "34ce0042-b647-451e-af06-1121a3b681b0" (UID: "34ce0042-b647-451e-af06-1121a3b681b0"). InnerVolumeSpecName "kube-api-access-n8k2m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:37.715936 kubelet[2773]: I0123 17:59:37.715869 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ce0042-b647-451e-af06-1121a3b681b0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "34ce0042-b647-451e-af06-1121a3b681b0" (UID: "34ce0042-b647-451e-af06-1121a3b681b0"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:59:37.717124 kubelet[2773]: I0123 17:59:37.716880 2773 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34ce0042-b647-451e-af06-1121a3b681b0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "34ce0042-b647-451e-af06-1121a3b681b0" (UID: "34ce0042-b647-451e-af06-1121a3b681b0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:37.803964 kubelet[2773]: I0123 17:59:37.803917 2773 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8k2m\" (UniqueName: \"kubernetes.io/projected/34ce0042-b647-451e-af06-1121a3b681b0-kube-api-access-n8k2m\") on node \"ci-4459-2-3-9-0be39219fc\" DevicePath \"\"" Jan 23 17:59:37.803964 kubelet[2773]: I0123 17:59:37.803955 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34ce0042-b647-451e-af06-1121a3b681b0-whisker-backend-key-pair\") on node \"ci-4459-2-3-9-0be39219fc\" DevicePath \"\"" Jan 23 17:59:37.803964 kubelet[2773]: I0123 17:59:37.803967 2773 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34ce0042-b647-451e-af06-1121a3b681b0-whisker-ca-bundle\") on node \"ci-4459-2-3-9-0be39219fc\" DevicePath \"\"" Jan 23 17:59:38.032458 systemd[1]: var-lib-kubelet-pods-34ce0042\x2db647\x2d451e\x2daf06\x2d1121a3b681b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8k2m.mount: Deactivated successfully. Jan 23 17:59:38.032603 systemd[1]: var-lib-kubelet-pods-34ce0042\x2db647\x2d451e\x2daf06\x2d1121a3b681b0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 17:59:38.624209 systemd[1]: Removed slice kubepods-besteffort-pod34ce0042_b647_451e_af06_1121a3b681b0.slice - libcontainer container kubepods-besteffort-pod34ce0042_b647_451e_af06_1121a3b681b0.slice. Jan 23 17:59:38.647694 kubelet[2773]: I0123 17:59:38.647590 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fkbl8" podStartSLOduration=2.54533208 podStartE2EDuration="15.647042506s" podCreationTimestamp="2026-01-23 17:59:23 +0000 UTC" firstStartedPulling="2026-01-23 17:59:23.957450895 +0000 UTC m=+26.710066349" lastFinishedPulling="2026-01-23 17:59:37.059161321 +0000 UTC m=+39.811776775" observedRunningTime="2026-01-23 17:59:37.670970317 +0000 UTC m=+40.423585771" watchObservedRunningTime="2026-01-23 17:59:38.647042506 +0000 UTC m=+41.399657920" Jan 23 17:59:38.722992 systemd[1]: Created slice kubepods-besteffort-pod35b6ccfd_7d16_48a4_9d79_4f9d7b111e38.slice - libcontainer container kubepods-besteffort-pod35b6ccfd_7d16_48a4_9d79_4f9d7b111e38.slice. 
Jan 23 17:59:38.811003 kubelet[2773]: I0123 17:59:38.810948 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fcl\" (UniqueName: \"kubernetes.io/projected/35b6ccfd-7d16-48a4-9d79-4f9d7b111e38-kube-api-access-87fcl\") pod \"whisker-86cf47b744-dzg5h\" (UID: \"35b6ccfd-7d16-48a4-9d79-4f9d7b111e38\") " pod="calico-system/whisker-86cf47b744-dzg5h" Jan 23 17:59:38.811003 kubelet[2773]: I0123 17:59:38.810996 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/35b6ccfd-7d16-48a4-9d79-4f9d7b111e38-whisker-backend-key-pair\") pod \"whisker-86cf47b744-dzg5h\" (UID: \"35b6ccfd-7d16-48a4-9d79-4f9d7b111e38\") " pod="calico-system/whisker-86cf47b744-dzg5h" Jan 23 17:59:38.811930 kubelet[2773]: I0123 17:59:38.811029 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35b6ccfd-7d16-48a4-9d79-4f9d7b111e38-whisker-ca-bundle\") pod \"whisker-86cf47b744-dzg5h\" (UID: \"35b6ccfd-7d16-48a4-9d79-4f9d7b111e38\") " pod="calico-system/whisker-86cf47b744-dzg5h" Jan 23 17:59:39.032559 containerd[1554]: time="2026-01-23T17:59:39.032505624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86cf47b744-dzg5h,Uid:35b6ccfd-7d16-48a4-9d79-4f9d7b111e38,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:39.289069 systemd-networkd[1419]: cali2bef5196234: Link UP Jan 23 17:59:39.290740 systemd-networkd[1419]: cali2bef5196234: Gained carrier Jan 23 17:59:39.318799 containerd[1554]: 2026-01-23 17:59:39.077 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:59:39.318799 containerd[1554]: 2026-01-23 17:59:39.136 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0 whisker-86cf47b744- calico-system 35b6ccfd-7d16-48a4-9d79-4f9d7b111e38 888 0 2026-01-23 17:59:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86cf47b744 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc whisker-86cf47b744-dzg5h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2bef5196234 [] [] }} ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-" Jan 23 17:59:39.318799 containerd[1554]: 2026-01-23 17:59:39.136 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.318799 containerd[1554]: 2026-01-23 17:59:39.204 [INFO][4016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" HandleID="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Workload="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.205 [INFO][4016] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" HandleID="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Workload="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038da90), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-9-0be39219fc", "pod":"whisker-86cf47b744-dzg5h", "timestamp":"2026-01-23 17:59:39.204754563 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.205 [INFO][4016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.205 [INFO][4016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.206 [INFO][4016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.223 [INFO][4016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.232 [INFO][4016] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.238 [INFO][4016] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.243 [INFO][4016] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319033 containerd[1554]: 2026-01-23 17:59:39.246 [INFO][4016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.247 [INFO][4016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 
handle="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.249 [INFO][4016] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.256 [INFO][4016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.264 [INFO][4016] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.1/26] block=192.168.124.0/26 handle="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.264 [INFO][4016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.1/26] handle="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.264 [INFO][4016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:59:39.319246 containerd[1554]: 2026-01-23 17:59:39.264 [INFO][4016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.1/26] IPv6=[] ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" HandleID="k8s-pod-network.76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Workload="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.319400 containerd[1554]: 2026-01-23 17:59:39.272 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0", GenerateName:"whisker-86cf47b744-", Namespace:"calico-system", SelfLink:"", UID:"35b6ccfd-7d16-48a4-9d79-4f9d7b111e38", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86cf47b744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"whisker-86cf47b744-dzg5h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali2bef5196234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:39.319400 containerd[1554]: 2026-01-23 17:59:39.272 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.1/32] ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.319469 containerd[1554]: 2026-01-23 17:59:39.272 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bef5196234 ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.319469 containerd[1554]: 2026-01-23 17:59:39.294 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.319510 containerd[1554]: 2026-01-23 17:59:39.296 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0", GenerateName:"whisker-86cf47b744-", Namespace:"calico-system", SelfLink:"", 
UID:"35b6ccfd-7d16-48a4-9d79-4f9d7b111e38", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86cf47b744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e", Pod:"whisker-86cf47b744-dzg5h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2bef5196234", MAC:"8a:dc:b9:a0:76:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:39.319561 containerd[1554]: 2026-01-23 17:59:39.314 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" Namespace="calico-system" Pod="whisker-86cf47b744-dzg5h" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-whisker--86cf47b744--dzg5h-eth0" Jan 23 17:59:39.378064 kubelet[2773]: I0123 17:59:39.377917 2773 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ce0042-b647-451e-af06-1121a3b681b0" path="/var/lib/kubelet/pods/34ce0042-b647-451e-af06-1121a3b681b0/volumes" Jan 23 17:59:39.388048 containerd[1554]: time="2026-01-23T17:59:39.387443137Z" level=info msg="connecting to shim 
76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e" address="unix:///run/containerd/s/4bbe75b141f28538bb29c0ccb9ef8589cf65dd18240cab534bccfc49cad095aa" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:39.465593 systemd[1]: Started cri-containerd-76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e.scope - libcontainer container 76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e. Jan 23 17:59:39.554757 containerd[1554]: time="2026-01-23T17:59:39.554574978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86cf47b744-dzg5h,Uid:35b6ccfd-7d16-48a4-9d79-4f9d7b111e38,Namespace:calico-system,Attempt:0,} returns sandbox id \"76a06f169f249b494c5be22bfcc341c450b594a86392e749ca7f7fec4d4d337e\"" Jan 23 17:59:39.560595 containerd[1554]: time="2026-01-23T17:59:39.560547627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:59:39.876884 systemd-networkd[1419]: vxlan.calico: Link UP Jan 23 17:59:39.876898 systemd-networkd[1419]: vxlan.calico: Gained carrier Jan 23 17:59:39.968467 containerd[1554]: time="2026-01-23T17:59:39.968326903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:39.970568 containerd[1554]: time="2026-01-23T17:59:39.970506006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:59:39.970912 containerd[1554]: time="2026-01-23T17:59:39.970887073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:59:39.976046 kubelet[2773]: E0123 17:59:39.975940 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:59:39.976046 kubelet[2773]: E0123 17:59:39.976029 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:59:39.980683 kubelet[2773]: E0123 17:59:39.980546 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:39.983133 containerd[1554]: time="2026-01-23T17:59:39.983037328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:59:40.336945 containerd[1554]: time="2026-01-23T17:59:40.336725365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:40.338486 containerd[1554]: time="2026-01-23T17:59:40.338260989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:59:40.338486 containerd[1554]: time="2026-01-23T17:59:40.338307677Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:59:40.338774 kubelet[2773]: E0123 17:59:40.338678 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:59:40.338839 kubelet[2773]: E0123 17:59:40.338760 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:59:40.338915 kubelet[2773]: E0123 17:59:40.338885 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:40.339074 kubelet[2773]: E0123 17:59:40.339038 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 17:59:40.595830 systemd-networkd[1419]: cali2bef5196234: Gained IPv6LL Jan 23 17:59:40.627088 kubelet[2773]: E0123 17:59:40.627020 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 17:59:41.043645 systemd-networkd[1419]: vxlan.calico: Gained IPv6LL Jan 23 17:59:43.251511 systemd[1]: Started sshd@7-49.12.73.152:22-109.107.189.250:37148.service - OpenSSH per-connection server daemon (109.107.189.250:37148). 
Jan 23 17:59:43.373889 containerd[1554]: time="2026-01-23T17:59:43.373801679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tqdk2,Uid:0960425a-3ec9-4577-9a39-48675b2f3498,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:43.484486 sshd[4213]: Invalid user sparepart from 109.107.189.250 port 37148 Jan 23 17:59:43.530701 sshd[4213]: Connection closed by invalid user sparepart 109.107.189.250 port 37148 [preauth] Jan 23 17:59:43.535312 systemd[1]: sshd@7-49.12.73.152:22-109.107.189.250:37148.service: Deactivated successfully. Jan 23 17:59:43.536102 systemd-networkd[1419]: cali1341d6c326b: Link UP Jan 23 17:59:43.537392 systemd-networkd[1419]: cali1341d6c326b: Gained carrier Jan 23 17:59:43.561844 containerd[1554]: 2026-01-23 17:59:43.433 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0 goldmane-7c778bb748- calico-system 0960425a-3ec9-4577-9a39-48675b2f3498 815 0 2026-01-23 17:59:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc goldmane-7c778bb748-tqdk2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1341d6c326b [] [] }} ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-" Jan 23 17:59:43.561844 containerd[1554]: 2026-01-23 17:59:43.433 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" 
WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.561844 containerd[1554]: 2026-01-23 17:59:43.467 [INFO][4228] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" HandleID="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Workload="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.468 [INFO][4228] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" HandleID="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Workload="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-9-0be39219fc", "pod":"goldmane-7c778bb748-tqdk2", "timestamp":"2026-01-23 17:59:43.467797114 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.468 [INFO][4228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.468 [INFO][4228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.468 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.488 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.496 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.504 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.507 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562093 containerd[1554]: 2026-01-23 17:59:43.511 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.512 [INFO][4228] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.514 [INFO][4228] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.520 [INFO][4228] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.528 [INFO][4228] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.124.2/26] block=192.168.124.0/26 handle="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.528 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.2/26] handle="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.528 [INFO][4228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:43.562629 containerd[1554]: 2026-01-23 17:59:43.528 [INFO][4228] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.2/26] IPv6=[] ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" HandleID="k8s-pod-network.f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Workload="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.562872 containerd[1554]: 2026-01-23 17:59:43.530 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0960425a-3ec9-4577-9a39-48675b2f3498", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"goldmane-7c778bb748-tqdk2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1341d6c326b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:43.562953 containerd[1554]: 2026-01-23 17:59:43.531 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.2/32] ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.562953 containerd[1554]: 2026-01-23 17:59:43.531 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1341d6c326b ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.562953 containerd[1554]: 2026-01-23 17:59:43.538 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.563013 containerd[1554]: 2026-01-23 17:59:43.539 [INFO][4220] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"0960425a-3ec9-4577-9a39-48675b2f3498", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c", Pod:"goldmane-7c778bb748-tqdk2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1341d6c326b", MAC:"26:d3:0a:49:00:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:43.563061 containerd[1554]: 2026-01-23 17:59:43.557 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" Namespace="calico-system" Pod="goldmane-7c778bb748-tqdk2" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-goldmane--7c778bb748--tqdk2-eth0" Jan 23 17:59:43.609620 containerd[1554]: time="2026-01-23T17:59:43.609575176Z" level=info msg="connecting to shim f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c" address="unix:///run/containerd/s/ddca20170de7c588f2bab435958f353288f96a547cf56642839a928eb5c70970" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:43.637504 systemd[1]: Started cri-containerd-f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c.scope - libcontainer container f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c. Jan 23 17:59:43.679888 containerd[1554]: time="2026-01-23T17:59:43.679838785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tqdk2,Uid:0960425a-3ec9-4577-9a39-48675b2f3498,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9fba1d99e1084ff5047cb54a9a39ecbd7878c9e4e951c7aa8fab8c51d0a9f5c\"" Jan 23 17:59:43.682186 containerd[1554]: time="2026-01-23T17:59:43.682135879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:59:44.032704 containerd[1554]: time="2026-01-23T17:59:44.032451716Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:44.034513 containerd[1554]: time="2026-01-23T17:59:44.034337498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:59:44.034513 containerd[1554]: time="2026-01-23T17:59:44.034378705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 
17:59:44.034793 kubelet[2773]: E0123 17:59:44.034700 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:59:44.034793 kubelet[2773]: E0123 17:59:44.034759 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:59:44.035498 kubelet[2773]: E0123 17:59:44.034859 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tqdk2_calico-system(0960425a-3ec9-4577-9a39-48675b2f3498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:44.035498 kubelet[2773]: E0123 17:59:44.034918 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 17:59:44.373392 containerd[1554]: time="2026-01-23T17:59:44.373226642Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6b9444d76-tdrjj,Uid:01f4753b-deaf-4717-98ef-d3ec1b50a10a,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:44.533801 systemd-networkd[1419]: calidee4cfd35f6: Link UP Jan 23 17:59:44.535563 systemd-networkd[1419]: calidee4cfd35f6: Gained carrier Jan 23 17:59:44.573760 containerd[1554]: 2026-01-23 17:59:44.427 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0 calico-kube-controllers-6b9444d76- calico-system 01f4753b-deaf-4717-98ef-d3ec1b50a10a 814 0 2026-01-23 17:59:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b9444d76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc calico-kube-controllers-6b9444d76-tdrjj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidee4cfd35f6 [] [] }} ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-" Jan 23 17:59:44.573760 containerd[1554]: 2026-01-23 17:59:44.428 [INFO][4296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.573760 containerd[1554]: 2026-01-23 17:59:44.470 [INFO][4309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" 
HandleID="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.470 [INFO][4309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" HandleID="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c18d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-9-0be39219fc", "pod":"calico-kube-controllers-6b9444d76-tdrjj", "timestamp":"2026-01-23 17:59:44.470532555 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.470 [INFO][4309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.470 [INFO][4309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.470 [INFO][4309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.482 [INFO][4309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.490 [INFO][4309] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.497 [INFO][4309] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.500 [INFO][4309] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574348 containerd[1554]: 2026-01-23 17:59:44.504 [INFO][4309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.504 [INFO][4309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.507 [INFO][4309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.515 [INFO][4309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.525 [INFO][4309] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.124.3/26] block=192.168.124.0/26 handle="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.525 [INFO][4309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.3/26] handle="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.525 [INFO][4309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:44.574552 containerd[1554]: 2026-01-23 17:59:44.525 [INFO][4309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.3/26] IPv6=[] ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" HandleID="k8s-pod-network.90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.574689 containerd[1554]: 2026-01-23 17:59:44.530 [INFO][4296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0", GenerateName:"calico-kube-controllers-6b9444d76-", Namespace:"calico-system", SelfLink:"", UID:"01f4753b-deaf-4717-98ef-d3ec1b50a10a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9444d76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"calico-kube-controllers-6b9444d76-tdrjj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidee4cfd35f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:44.574740 containerd[1554]: 2026-01-23 17:59:44.530 [INFO][4296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.3/32] ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.574740 containerd[1554]: 2026-01-23 17:59:44.530 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidee4cfd35f6 ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.574740 containerd[1554]: 2026-01-23 17:59:44.534 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.574798 containerd[1554]: 2026-01-23 17:59:44.534 [INFO][4296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0", GenerateName:"calico-kube-controllers-6b9444d76-", Namespace:"calico-system", SelfLink:"", UID:"01f4753b-deaf-4717-98ef-d3ec1b50a10a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b9444d76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe", Pod:"calico-kube-controllers-6b9444d76-tdrjj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidee4cfd35f6", MAC:"4a:cc:ea:a1:aa:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:44.574847 containerd[1554]: 2026-01-23 17:59:44.569 [INFO][4296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" Namespace="calico-system" Pod="calico-kube-controllers-6b9444d76-tdrjj" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--kube--controllers--6b9444d76--tdrjj-eth0" Jan 23 17:59:44.618595 containerd[1554]: time="2026-01-23T17:59:44.618532550Z" level=info msg="connecting to shim 90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe" address="unix:///run/containerd/s/b34d5cbdf28d1417e63560f9483dd26b54e76e5e9558f7d59b96583078e43041" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:44.652284 kubelet[2773]: E0123 17:59:44.651965 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 17:59:44.688498 systemd[1]: Started cri-containerd-90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe.scope - libcontainer container 90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe. 
Jan 23 17:59:44.800056 containerd[1554]: time="2026-01-23T17:59:44.800013591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b9444d76-tdrjj,Uid:01f4753b-deaf-4717-98ef-d3ec1b50a10a,Namespace:calico-system,Attempt:0,} returns sandbox id \"90a37e9c1933fdf7d5d87ca4367b1881dcab57c03bcd1855d2a245f56c38abfe\"" Jan 23 17:59:44.804587 containerd[1554]: time="2026-01-23T17:59:44.804548598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:59:45.011730 systemd-networkd[1419]: cali1341d6c326b: Gained IPv6LL Jan 23 17:59:45.182708 containerd[1554]: time="2026-01-23T17:59:45.182640009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:45.184057 containerd[1554]: time="2026-01-23T17:59:45.183990542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:59:45.184199 containerd[1554]: time="2026-01-23T17:59:45.184006264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:59:45.184383 kubelet[2773]: E0123 17:59:45.184333 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:59:45.184748 kubelet[2773]: E0123 17:59:45.184391 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:59:45.184748 kubelet[2773]: E0123 17:59:45.184457 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:45.184748 kubelet[2773]: E0123 17:59:45.184488 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 17:59:45.374163 containerd[1554]: time="2026-01-23T17:59:45.373623809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc96b,Uid:19e7b4f8-da69-42b9-9afe-ffb180e828e6,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:45.527672 systemd-networkd[1419]: cali73df132ea2a: Link UP Jan 23 17:59:45.528121 systemd-networkd[1419]: cali73df132ea2a: Gained carrier Jan 23 17:59:45.549596 containerd[1554]: 2026-01-23 17:59:45.426 [INFO][4378] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0 csi-node-driver- calico-system 19e7b4f8-da69-42b9-9afe-ffb180e828e6 771 0 2026-01-23 17:59:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc csi-node-driver-lc96b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali73df132ea2a [] [] }} ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-" Jan 23 17:59:45.549596 containerd[1554]: 2026-01-23 17:59:45.426 [INFO][4378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.549596 containerd[1554]: 2026-01-23 17:59:45.463 [INFO][4389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" HandleID="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Workload="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.463 [INFO][4389] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" HandleID="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Workload="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002556a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-9-0be39219fc", "pod":"csi-node-driver-lc96b", "timestamp":"2026-01-23 17:59:45.463200055 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.463 [INFO][4389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.463 [INFO][4389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.463 [INFO][4389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.478 [INFO][4389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.485 [INFO][4389] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.490 [INFO][4389] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.493 [INFO][4389] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.549874 containerd[1554]: 2026-01-23 17:59:45.496 [INFO][4389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.496 [INFO][4389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.124.0/26 handle="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.498 [INFO][4389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0 Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.505 [INFO][4389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.517 [INFO][4389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.4/26] block=192.168.124.0/26 handle="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.517 [INFO][4389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.4/26] handle="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.517 [INFO][4389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:59:45.550186 containerd[1554]: 2026-01-23 17:59:45.517 [INFO][4389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.4/26] IPv6=[] ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" HandleID="k8s-pod-network.89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Workload="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.550502 containerd[1554]: 2026-01-23 17:59:45.523 [INFO][4378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19e7b4f8-da69-42b9-9afe-ffb180e828e6", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"csi-node-driver-lc96b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73df132ea2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:45.550596 containerd[1554]: 2026-01-23 17:59:45.523 [INFO][4378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.4/32] ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.550596 containerd[1554]: 2026-01-23 17:59:45.523 [INFO][4378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73df132ea2a ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.550596 containerd[1554]: 2026-01-23 17:59:45.529 [INFO][4378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.550795 containerd[1554]: 2026-01-23 17:59:45.529 [INFO][4378] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19e7b4f8-da69-42b9-9afe-ffb180e828e6", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0", Pod:"csi-node-driver-lc96b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali73df132ea2a", MAC:"a2:3f:ff:2c:6a:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:45.550966 containerd[1554]: 2026-01-23 17:59:45.545 [INFO][4378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" Namespace="calico-system" Pod="csi-node-driver-lc96b" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-csi--node--driver--lc96b-eth0" Jan 23 17:59:45.585204 containerd[1554]: time="2026-01-23T17:59:45.584691296Z" level=info msg="connecting to shim 89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0" 
address="unix:///run/containerd/s/23591a61fbcee2a0a15aa2d18d90c94a91f554c26ba82e015b7463cce747bfb8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:45.609600 systemd[1]: Started cri-containerd-89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0.scope - libcontainer container 89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0. Jan 23 17:59:45.697597 kubelet[2773]: E0123 17:59:45.697368 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 17:59:45.697863 kubelet[2773]: E0123 17:59:45.697760 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 17:59:45.754153 containerd[1554]: time="2026-01-23T17:59:45.754078289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lc96b,Uid:19e7b4f8-da69-42b9-9afe-ffb180e828e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"89be9d660f3a76df85ab277c2f95c2cbe92a4685b28245cfc7b419b456bf0bb0\"" Jan 23 
17:59:45.762416 containerd[1554]: time="2026-01-23T17:59:45.762376918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:59:45.843625 systemd-networkd[1419]: calidee4cfd35f6: Gained IPv6LL Jan 23 17:59:46.095896 containerd[1554]: time="2026-01-23T17:59:46.095631208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:46.097452 containerd[1554]: time="2026-01-23T17:59:46.097380480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:59:46.097637 containerd[1554]: time="2026-01-23T17:59:46.097538264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:59:46.098074 kubelet[2773]: E0123 17:59:46.097944 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:59:46.098298 kubelet[2773]: E0123 17:59:46.098049 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:59:46.098760 kubelet[2773]: E0123 17:59:46.098563 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:46.101112 containerd[1554]: time="2026-01-23T17:59:46.101076774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:59:46.373113 containerd[1554]: time="2026-01-23T17:59:46.372915120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5cgdl,Uid:1177e5f5-20db-4e39-bf2b-92e330c49d30,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:46.375249 containerd[1554]: time="2026-01-23T17:59:46.375160548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-7pkzs,Uid:12cc26cd-794b-4ec3-9b6f-99c6d249650c,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:46.432984 containerd[1554]: time="2026-01-23T17:59:46.432835147Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:46.441482 containerd[1554]: time="2026-01-23T17:59:46.441405318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:59:46.441788 containerd[1554]: time="2026-01-23T17:59:46.441767335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:59:46.442194 kubelet[2773]: E0123 17:59:46.442060 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:59:46.443666 kubelet[2773]: E0123 17:59:46.443062 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:59:46.443666 kubelet[2773]: E0123 17:59:46.443176 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:46.443666 kubelet[2773]: E0123 17:59:46.443216 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" 
podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 17:59:46.565739 systemd-networkd[1419]: cali5e053f2496c: Link UP Jan 23 17:59:46.567011 systemd-networkd[1419]: cali5e053f2496c: Gained carrier Jan 23 17:59:46.589796 containerd[1554]: 2026-01-23 17:59:46.452 [INFO][4453] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0 calico-apiserver-6f4f6d594f- calico-apiserver 12cc26cd-794b-4ec3-9b6f-99c6d249650c 818 0 2026-01-23 17:59:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f4f6d594f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc calico-apiserver-6f4f6d594f-7pkzs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5e053f2496c [] [] }} ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-" Jan 23 17:59:46.589796 containerd[1554]: 2026-01-23 17:59:46.452 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.589796 containerd[1554]: 2026-01-23 17:59:46.501 [INFO][4477] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" HandleID="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" 
Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.502 [INFO][4477] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" HandleID="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-9-0be39219fc", "pod":"calico-apiserver-6f4f6d594f-7pkzs", "timestamp":"2026-01-23 17:59:46.501870471 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.502 [INFO][4477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.502 [INFO][4477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.502 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.514 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.522 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.529 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.532 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590647 containerd[1554]: 2026-01-23 17:59:46.535 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.535 [INFO][4477] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.538 [INFO][4477] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28 Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.544 [INFO][4477] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.553 [INFO][4477] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.124.5/26] block=192.168.124.0/26 handle="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.553 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.5/26] handle="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.553 [INFO][4477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:46.590884 containerd[1554]: 2026-01-23 17:59:46.553 [INFO][4477] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.5/26] IPv6=[] ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" HandleID="k8s-pod-network.c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.591043 containerd[1554]: 2026-01-23 17:59:46.558 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0", GenerateName:"calico-apiserver-6f4f6d594f-", Namespace:"calico-apiserver", SelfLink:"", UID:"12cc26cd-794b-4ec3-9b6f-99c6d249650c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f4f6d594f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"calico-apiserver-6f4f6d594f-7pkzs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5e053f2496c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:46.591104 containerd[1554]: 2026-01-23 17:59:46.559 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.5/32] ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.591104 containerd[1554]: 2026-01-23 17:59:46.559 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e053f2496c ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.591104 containerd[1554]: 2026-01-23 17:59:46.568 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" 
Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.591176 containerd[1554]: 2026-01-23 17:59:46.570 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0", GenerateName:"calico-apiserver-6f4f6d594f-", Namespace:"calico-apiserver", SelfLink:"", UID:"12cc26cd-794b-4ec3-9b6f-99c6d249650c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f4f6d594f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28", Pod:"calico-apiserver-6f4f6d594f-7pkzs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali5e053f2496c", MAC:"7a:89:97:cc:73:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:46.591235 containerd[1554]: 2026-01-23 17:59:46.587 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-7pkzs" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--7pkzs-eth0" Jan 23 17:59:46.622283 containerd[1554]: time="2026-01-23T17:59:46.622174198Z" level=info msg="connecting to shim c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28" address="unix:///run/containerd/s/0ca5e20be7f6e0eecc444b3496d4b00f2413a0e5f7146ac779bbc92192d1b5d1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:46.665538 systemd[1]: Started cri-containerd-c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28.scope - libcontainer container c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28. 
Jan 23 17:59:46.693032 systemd-networkd[1419]: calid2b7bbe02dc: Link UP Jan 23 17:59:46.694830 systemd-networkd[1419]: calid2b7bbe02dc: Gained carrier Jan 23 17:59:46.710998 kubelet[2773]: E0123 17:59:46.710859 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 17:59:46.714008 kubelet[2773]: E0123 17:59:46.713945 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 17:59:46.734012 containerd[1554]: 2026-01-23 17:59:46.462 [INFO][4451] cni-plugin/plugin.go 340: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0 coredns-66bc5c9577- kube-system 1177e5f5-20db-4e39-bf2b-92e330c49d30 812 0 2026-01-23 17:59:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc coredns-66bc5c9577-5cgdl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2b7bbe02dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-" Jan 23 17:59:46.734012 containerd[1554]: 2026-01-23 17:59:46.462 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 17:59:46.734012 containerd[1554]: 2026-01-23 17:59:46.505 [INFO][4482] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" HandleID="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Workload="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.505 [INFO][4482] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" HandleID="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" 
Workload="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-9-0be39219fc", "pod":"coredns-66bc5c9577-5cgdl", "timestamp":"2026-01-23 17:59:46.505180705 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.505 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.553 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.554 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.615 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.626 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.639 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.645 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.734235 containerd[1554]: 2026-01-23 17:59:46.650 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.735013 
containerd[1554]: 2026-01-23 17:59:46.650 [INFO][4482] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.735013 containerd[1554]: 2026-01-23 17:59:46.655 [INFO][4482] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757 Jan 23 17:59:46.735013 containerd[1554]: 2026-01-23 17:59:46.661 [INFO][4482] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.735013 containerd[1554]: 2026-01-23 17:59:46.675 [INFO][4482] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.6/26] block=192.168.124.0/26 handle="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.735013 containerd[1554]: 2026-01-23 17:59:46.675 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.6/26] handle="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:46.735013 containerd[1554]: 2026-01-23 17:59:46.676 [INFO][4482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:59:46.735013 containerd[1554]: 2026-01-23 17:59:46.676 [INFO][4482] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.6/26] IPv6=[] ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" HandleID="k8s-pod-network.4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Workload="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 17:59:46.735154 containerd[1554]: 2026-01-23 17:59:46.684 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1177e5f5-20db-4e39-bf2b-92e330c49d30", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"coredns-66bc5c9577-5cgdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calid2b7bbe02dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:46.735154 containerd[1554]: 2026-01-23 17:59:46.684 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.6/32] ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 17:59:46.735154 containerd[1554]: 2026-01-23 17:59:46.685 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2b7bbe02dc ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 17:59:46.735154 containerd[1554]: 2026-01-23 17:59:46.695 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 
17:59:46.735154 containerd[1554]: 2026-01-23 17:59:46.705 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1177e5f5-20db-4e39-bf2b-92e330c49d30", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757", Pod:"coredns-66bc5c9577-5cgdl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2b7bbe02dc", MAC:"de:a3:96:67:d2:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:46.736543 containerd[1554]: 2026-01-23 17:59:46.726 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" Namespace="kube-system" Pod="coredns-66bc5c9577-5cgdl" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--5cgdl-eth0" Jan 23 17:59:46.784055 containerd[1554]: time="2026-01-23T17:59:46.783981012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-7pkzs,Uid:12cc26cd-794b-4ec3-9b6f-99c6d249650c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c0d5f72968eb112c51fab412e5a504c3550915c09c588bdf03f022d13789ac28\"" Jan 23 17:59:46.787417 containerd[1554]: time="2026-01-23T17:59:46.787040647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:59:46.791262 containerd[1554]: time="2026-01-23T17:59:46.791192812Z" level=info msg="connecting to shim 4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757" address="unix:///run/containerd/s/b02cbdab7b51b8cd3a755c62506324a32f80629ff78cacb24f61ecd659151e89" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:46.803539 systemd-networkd[1419]: cali73df132ea2a: Gained IPv6LL Jan 23 17:59:46.837529 systemd[1]: Started cri-containerd-4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757.scope - libcontainer container 
4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757. Jan 23 17:59:46.885866 containerd[1554]: time="2026-01-23T17:59:46.885707653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5cgdl,Uid:1177e5f5-20db-4e39-bf2b-92e330c49d30,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757\"" Jan 23 17:59:46.893613 containerd[1554]: time="2026-01-23T17:59:46.893535909Z" level=info msg="CreateContainer within sandbox \"4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:59:46.903928 containerd[1554]: time="2026-01-23T17:59:46.903877436Z" level=info msg="Container fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:46.911023 containerd[1554]: time="2026-01-23T17:59:46.910530429Z" level=info msg="CreateContainer within sandbox \"4bba921deec1bffd251a5e7556076f39aa94f9f1e9f2eeff403a656e38c89757\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902\"" Jan 23 17:59:46.912338 containerd[1554]: time="2026-01-23T17:59:46.911523463Z" level=info msg="StartContainer for \"fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902\"" Jan 23 17:59:46.912998 containerd[1554]: time="2026-01-23T17:59:46.912952325Z" level=info msg="connecting to shim fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902" address="unix:///run/containerd/s/b02cbdab7b51b8cd3a755c62506324a32f80629ff78cacb24f61ecd659151e89" protocol=ttrpc version=3 Jan 23 17:59:46.936515 systemd[1]: Started cri-containerd-fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902.scope - libcontainer container fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902. 
Jan 23 17:59:46.983491 containerd[1554]: time="2026-01-23T17:59:46.983438634Z" level=info msg="StartContainer for \"fe976258675d62e8afbbc42a394cb61d0d1fe207d76446464dfe1dc6956d8902\" returns successfully" Jan 23 17:59:47.142851 containerd[1554]: time="2026-01-23T17:59:47.142798833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:47.145238 containerd[1554]: time="2026-01-23T17:59:47.145129030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:59:47.145628 containerd[1554]: time="2026-01-23T17:59:47.145151833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:59:47.146089 kubelet[2773]: E0123 17:59:47.145887 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:59:47.146089 kubelet[2773]: E0123 17:59:47.145938 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:59:47.146089 kubelet[2773]: E0123 17:59:47.146018 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-6f4f6d594f-7pkzs_calico-apiserver(12cc26cd-794b-4ec3-9b6f-99c6d249650c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:47.146089 kubelet[2773]: E0123 17:59:47.146048 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 17:59:47.377050 containerd[1554]: time="2026-01-23T17:59:47.377012013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw8bc,Uid:15c8748c-1044-422b-bd07-3cf9093f7667,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:47.554360 systemd-networkd[1419]: cali6a1c9dfbfdb: Link UP Jan 23 17:59:47.554565 systemd-networkd[1419]: cali6a1c9dfbfdb: Gained carrier Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.436 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0 coredns-66bc5c9577- kube-system 15c8748c-1044-422b-bd07-3cf9093f7667 813 0 2026-01-23 17:59:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc coredns-66bc5c9577-sw8bc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a1c9dfbfdb [{dns UDP 53 0 } {dns-tcp 
TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.436 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.476 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" HandleID="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Workload="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.476 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" HandleID="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Workload="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b260), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-9-0be39219fc", "pod":"coredns-66bc5c9577-sw8bc", "timestamp":"2026-01-23 17:59:47.476512687 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.476 
[INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.476 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.476 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.491 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.500 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.507 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.510 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.515 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.515 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.520 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209 Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.527 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 
handle="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.539 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.7/26] block=192.168.124.0/26 handle="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.539 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.7/26] handle="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.539 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:47.565313 containerd[1554]: 2026-01-23 17:59:47.539 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.7/26] IPv6=[] ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" HandleID="k8s-pod-network.2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Workload="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.565953 containerd[1554]: 2026-01-23 17:59:47.542 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"15c8748c-1044-422b-bd07-3cf9093f7667", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 4, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"coredns-66bc5c9577-sw8bc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a1c9dfbfdb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:47.565953 containerd[1554]: 2026-01-23 17:59:47.543 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.7/32] ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" 
WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.565953 containerd[1554]: 2026-01-23 17:59:47.543 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a1c9dfbfdb ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.565953 containerd[1554]: 2026-01-23 17:59:47.545 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.565953 containerd[1554]: 2026-01-23 17:59:47.546 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"15c8748c-1044-422b-bd07-3cf9093f7667", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209", Pod:"coredns-66bc5c9577-sw8bc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a1c9dfbfdb", MAC:"ae:f6:42:ec:84:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:47.566143 containerd[1554]: 2026-01-23 17:59:47.557 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" Namespace="kube-system" Pod="coredns-66bc5c9577-sw8bc" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-coredns--66bc5c9577--sw8bc-eth0" Jan 23 17:59:47.598684 containerd[1554]: time="2026-01-23T17:59:47.598438075Z" level=info msg="connecting to shim 2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209" 
address="unix:///run/containerd/s/d5e1a5288796265678f85edaab00e86f21dbabdbff36c8643979a680c19325f3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:47.655570 systemd[1]: Started cri-containerd-2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209.scope - libcontainer container 2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209. Jan 23 17:59:47.708954 containerd[1554]: time="2026-01-23T17:59:47.707827263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sw8bc,Uid:15c8748c-1044-422b-bd07-3cf9093f7667,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209\"" Jan 23 17:59:47.716303 kubelet[2773]: E0123 17:59:47.716142 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 17:59:47.718736 kubelet[2773]: E0123 17:59:47.718219 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 17:59:47.720315 containerd[1554]: time="2026-01-23T17:59:47.720098422Z" level=info msg="CreateContainer within sandbox \"2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:59:47.738238 containerd[1554]: time="2026-01-23T17:59:47.737516568Z" level=info msg="Container d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:47.747850 containerd[1554]: time="2026-01-23T17:59:47.747723611Z" level=info msg="CreateContainer within sandbox \"2f126063c72bb49560e092aac10ce6d2368593032b76f813fb58dbbc6f5c7209\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594\"" Jan 23 17:59:47.749284 containerd[1554]: time="2026-01-23T17:59:47.749162391Z" level=info msg="StartContainer for \"d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594\"" Jan 23 17:59:47.751922 containerd[1554]: time="2026-01-23T17:59:47.751869766Z" level=info msg="connecting to shim d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594" address="unix:///run/containerd/s/d5e1a5288796265678f85edaab00e86f21dbabdbff36c8643979a680c19325f3" protocol=ttrpc version=3 Jan 23 17:59:47.774214 kubelet[2773]: I0123 17:59:47.772868 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5cgdl" podStartSLOduration=43.772848898 podStartE2EDuration="43.772848898s" 
podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:47.739801598 +0000 UTC m=+50.492417092" watchObservedRunningTime="2026-01-23 17:59:47.772848898 +0000 UTC m=+50.525464352" Jan 23 17:59:47.785519 systemd[1]: Started cri-containerd-d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594.scope - libcontainer container d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594. Jan 23 17:59:47.855100 containerd[1554]: time="2026-01-23T17:59:47.854597254Z" level=info msg="StartContainer for \"d5fedbb74e7d7326ce4f5416508d50731a1cc46bc9a4bc6f1f26347c89ba8594\" returns successfully" Jan 23 17:59:48.147530 systemd-networkd[1419]: calid2b7bbe02dc: Gained IPv6LL Jan 23 17:59:48.276752 systemd-networkd[1419]: cali5e053f2496c: Gained IPv6LL Jan 23 17:59:48.373080 containerd[1554]: time="2026-01-23T17:59:48.373001691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-f5jb7,Uid:974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:48.523861 systemd-networkd[1419]: cali5f081d9c187: Link UP Jan 23 17:59:48.526392 systemd-networkd[1419]: cali5f081d9c187: Gained carrier Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.430 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0 calico-apiserver-6f4f6d594f- calico-apiserver 974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59 816 0 2026-01-23 17:59:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f4f6d594f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-9-0be39219fc 
calico-apiserver-6f4f6d594f-f5jb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5f081d9c187 [] [] }} ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.431 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.462 [INFO][4756] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" HandleID="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.462 [INFO][4756] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" HandleID="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-9-0be39219fc", "pod":"calico-apiserver-6f4f6d594f-f5jb7", "timestamp":"2026-01-23 17:59:48.462131751 +0000 UTC"}, Hostname:"ci-4459-2-3-9-0be39219fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.462 [INFO][4756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.462 [INFO][4756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.462 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-9-0be39219fc' Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.477 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.484 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.491 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.494 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.497 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.0/26 host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.498 [INFO][4756] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.0/26 handle="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.500 [INFO][4756] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82 Jan 23 17:59:48.551203 
containerd[1554]: 2026-01-23 17:59:48.507 [INFO][4756] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.0/26 handle="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.517 [INFO][4756] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.8/26] block=192.168.124.0/26 handle="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.517 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.8/26] handle="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" host="ci-4459-2-3-9-0be39219fc" Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.517 [INFO][4756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:48.551203 containerd[1554]: 2026-01-23 17:59:48.517 [INFO][4756] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.8/26] IPv6=[] ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" HandleID="k8s-pod-network.35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Workload="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.552011 containerd[1554]: 2026-01-23 17:59:48.520 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0", 
GenerateName:"calico-apiserver-6f4f6d594f-", Namespace:"calico-apiserver", SelfLink:"", UID:"974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f4f6d594f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", ContainerID:"", Pod:"calico-apiserver-6f4f6d594f-f5jb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f081d9c187", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:48.552011 containerd[1554]: 2026-01-23 17:59:48.520 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.8/32] ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.552011 containerd[1554]: 2026-01-23 17:59:48.520 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f081d9c187 ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" 
Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.552011 containerd[1554]: 2026-01-23 17:59:48.528 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.552011 containerd[1554]: 2026-01-23 17:59:48.528 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0", GenerateName:"calico-apiserver-6f4f6d594f-", Namespace:"calico-apiserver", SelfLink:"", UID:"974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f4f6d594f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-9-0be39219fc", 
ContainerID:"35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82", Pod:"calico-apiserver-6f4f6d594f-f5jb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5f081d9c187", MAC:"7e:32:e6:90:c6:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:48.552011 containerd[1554]: 2026-01-23 17:59:48.548 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" Namespace="calico-apiserver" Pod="calico-apiserver-6f4f6d594f-f5jb7" WorkloadEndpoint="ci--4459--2--3--9--0be39219fc-k8s-calico--apiserver--6f4f6d594f--f5jb7-eth0" Jan 23 17:59:48.583006 containerd[1554]: time="2026-01-23T17:59:48.582844701Z" level=info msg="connecting to shim 35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82" address="unix:///run/containerd/s/69d37e5a829c920dadb2b46f15f58de24cddc8fb06afe352b9c3f7aebe5317e7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:48.620614 systemd[1]: Started cri-containerd-35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82.scope - libcontainer container 35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82. 
Jan 23 17:59:48.682211 containerd[1554]: time="2026-01-23T17:59:48.682001916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f4f6d594f-f5jb7,Uid:974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"35735e817c1e45f70677a35fe775bd4b4979b5551f669b96b03efd64fe6c3c82\"" Jan 23 17:59:48.684857 containerd[1554]: time="2026-01-23T17:59:48.684756892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:59:48.727803 kubelet[2773]: E0123 17:59:48.727756 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 17:59:48.766676 kubelet[2773]: I0123 17:59:48.766601 2773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sw8bc" podStartSLOduration=44.766578289 podStartE2EDuration="44.766578289s" podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:48.742869868 +0000 UTC m=+51.495485362" watchObservedRunningTime="2026-01-23 17:59:48.766578289 +0000 UTC m=+51.519193783" Jan 23 17:59:49.024774 containerd[1554]: time="2026-01-23T17:59:49.024675625Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:49.026350 containerd[1554]: time="2026-01-23T17:59:49.026284464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:59:49.027618 containerd[1554]: time="2026-01-23T17:59:49.026298826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:59:49.027743 kubelet[2773]: E0123 17:59:49.026573 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:59:49.027743 kubelet[2773]: E0123 17:59:49.026642 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:59:49.027743 kubelet[2773]: E0123 17:59:49.026796 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:49.027743 kubelet[2773]: E0123 17:59:49.026860 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 17:59:49.300177 systemd-networkd[1419]: cali6a1c9dfbfdb: Gained IPv6LL Jan 23 17:59:49.728560 kubelet[2773]: E0123 17:59:49.728429 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 17:59:49.939559 systemd-networkd[1419]: cali5f081d9c187: Gained IPv6LL Jan 23 17:59:51.374033 containerd[1554]: time="2026-01-23T17:59:51.373969819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:59:51.750262 containerd[1554]: time="2026-01-23T17:59:51.749825749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:51.752557 containerd[1554]: time="2026-01-23T17:59:51.752344716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:59:51.752557 containerd[1554]: time="2026-01-23T17:59:51.752495978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, 
bytes read=73" Jan 23 17:59:51.752795 kubelet[2773]: E0123 17:59:51.752736 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:59:51.753313 kubelet[2773]: E0123 17:59:51.752806 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:59:51.753313 kubelet[2773]: E0123 17:59:51.752909 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:51.754695 containerd[1554]: time="2026-01-23T17:59:51.754581001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:59:52.102242 containerd[1554]: time="2026-01-23T17:59:52.102136330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:52.103825 containerd[1554]: time="2026-01-23T17:59:52.103742961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:59:52.104112 containerd[1554]: time="2026-01-23T17:59:52.103862778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:59:52.104235 kubelet[2773]: E0123 17:59:52.104172 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:59:52.104582 kubelet[2773]: E0123 17:59:52.104368 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:59:52.104582 kubelet[2773]: E0123 17:59:52.104482 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:52.104582 kubelet[2773]: E0123 17:59:52.104534 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 17:59:58.371801 containerd[1554]: time="2026-01-23T17:59:58.371435402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:59:58.719254 containerd[1554]: time="2026-01-23T17:59:58.719105373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:58.721449 containerd[1554]: time="2026-01-23T17:59:58.721378246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:59:58.721449 containerd[1554]: time="2026-01-23T17:59:58.721357724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:59:58.722330 kubelet[2773]: E0123 17:59:58.721662 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:59:58.722330 kubelet[2773]: E0123 17:59:58.721720 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:59:58.722330 kubelet[2773]: E0123 17:59:58.721799 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tqdk2_calico-system(0960425a-3ec9-4577-9a39-48675b2f3498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:58.722330 kubelet[2773]: E0123 17:59:58.721835 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:00:00.373193 containerd[1554]: time="2026-01-23T18:00:00.373144616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:00:01.747424 containerd[1554]: time="2026-01-23T18:00:01.747145674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:01.750677 containerd[1554]: time="2026-01-23T18:00:01.750509472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" Jan 23 18:00:01.750677 containerd[1554]: time="2026-01-23T18:00:01.750570062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:01.750927 kubelet[2773]: E0123 18:00:01.750847 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:01.750927 kubelet[2773]: E0123 18:00:01.750905 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:01.751772 kubelet[2773]: E0123 18:00:01.751416 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:01.751772 kubelet[2773]: E0123 18:00:01.751464 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:00:01.752425 containerd[1554]: time="2026-01-23T18:00:01.752369761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:03.394089 containerd[1554]: time="2026-01-23T18:00:03.394021960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:03.395783 containerd[1554]: time="2026-01-23T18:00:03.395737300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:03.396373 containerd[1554]: time="2026-01-23T18:00:03.395768776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:03.396564 kubelet[2773]: E0123 18:00:03.396512 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:03.397482 kubelet[2773]: E0123 18:00:03.396564 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:03.397482 kubelet[2773]: E0123 18:00:03.396842 2773 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:03.397482 kubelet[2773]: E0123 18:00:03.396956 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:00:03.398035 containerd[1554]: time="2026-01-23T18:00:03.397782710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:04.783811 containerd[1554]: time="2026-01-23T18:00:04.783566515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:04.785096 containerd[1554]: time="2026-01-23T18:00:04.784929919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:04.785096 containerd[1554]: time="2026-01-23T18:00:04.785059140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:04.785655 kubelet[2773]: E0123 18:00:04.785571 2773 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:04.785655 kubelet[2773]: E0123 18:00:04.785631 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:04.786689 kubelet[2773]: E0123 18:00:04.785885 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-7pkzs_calico-apiserver(12cc26cd-794b-4ec3-9b6f-99c6d249650c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:04.786689 kubelet[2773]: E0123 18:00:04.785964 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:00:04.788567 containerd[1554]: time="2026-01-23T18:00:04.788259039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:00:06.419241 containerd[1554]: 
time="2026-01-23T18:00:06.419174605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:06.421323 containerd[1554]: time="2026-01-23T18:00:06.421191143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:00:06.421323 containerd[1554]: time="2026-01-23T18:00:06.421284931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:00:06.421524 kubelet[2773]: E0123 18:00:06.421444 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:06.421524 kubelet[2773]: E0123 18:00:06.421491 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:06.421816 kubelet[2773]: E0123 18:00:06.421594 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:06.424125 containerd[1554]: 
time="2026-01-23T18:00:06.424059330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:00:07.182342 containerd[1554]: time="2026-01-23T18:00:07.182248952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:07.183993 containerd[1554]: time="2026-01-23T18:00:07.183926146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:00:07.184108 containerd[1554]: time="2026-01-23T18:00:07.184037372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:00:07.184528 kubelet[2773]: E0123 18:00:07.184396 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:07.184641 kubelet[2773]: E0123 18:00:07.184534 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:07.186377 kubelet[2773]: E0123 18:00:07.184652 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start 
failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:07.186495 kubelet[2773]: E0123 18:00:07.186412 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:00:07.376184 kubelet[2773]: E0123 18:00:07.376055 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:00:13.374319 kubelet[2773]: E0123 18:00:13.372911 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:00:14.371214 kubelet[2773]: E0123 18:00:14.371165 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:00:15.375371 kubelet[2773]: E0123 18:00:15.374906 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:00:19.377446 containerd[1554]: time="2026-01-23T18:00:19.376849000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:00:20.373841 kubelet[2773]: E0123 18:00:20.373541 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:00:20.373841 kubelet[2773]: E0123 18:00:20.373749 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:00:20.599738 containerd[1554]: time="2026-01-23T18:00:20.599668352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:20.601705 containerd[1554]: time="2026-01-23T18:00:20.601621210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:00:20.601844 containerd[1554]: time="2026-01-23T18:00:20.601781281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:00:20.603311 kubelet[2773]: E0123 18:00:20.602033 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:00:20.603538 kubelet[2773]: E0123 18:00:20.603419 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:00:20.603710 kubelet[2773]: E0123 18:00:20.603689 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:20.606334 containerd[1554]: time="2026-01-23T18:00:20.606279845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:00:21.795803 containerd[1554]: time="2026-01-23T18:00:21.795741455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:21.797967 containerd[1554]: time="2026-01-23T18:00:21.797872312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:00:21.797967 containerd[1554]: time="2026-01-23T18:00:21.797931589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:21.798385 kubelet[2773]: E0123 18:00:21.798264 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:00:21.799290 kubelet[2773]: E0123 18:00:21.798374 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 
18:00:21.799290 kubelet[2773]: E0123 18:00:21.798986 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:21.799290 kubelet[2773]: E0123 18:00:21.799047 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:00:27.385689 containerd[1554]: time="2026-01-23T18:00:27.385520075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:00:27.914623 containerd[1554]: time="2026-01-23T18:00:27.914564725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:27.915898 containerd[1554]: time="2026-01-23T18:00:27.915807254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:00:27.915898 containerd[1554]: time="2026-01-23T18:00:27.915894932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:27.916204 kubelet[2773]: E0123 18:00:27.916160 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:27.917386 kubelet[2773]: E0123 18:00:27.916677 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:27.917386 kubelet[2773]: E0123 18:00:27.916874 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tqdk2_calico-system(0960425a-3ec9-4577-9a39-48675b2f3498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:27.917386 kubelet[2773]: E0123 18:00:27.917027 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:00:27.918010 containerd[1554]: time="2026-01-23T18:00:27.917063143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:28.277305 containerd[1554]: time="2026-01-23T18:00:28.276866674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:28.279285 containerd[1554]: time="2026-01-23T18:00:28.279171745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:28.279564 containerd[1554]: time="2026-01-23T18:00:28.279477378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:28.279739 kubelet[2773]: E0123 18:00:28.279694 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:28.279806 kubelet[2773]: E0123 18:00:28.279749 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:28.279911 kubelet[2773]: E0123 18:00:28.279886 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver 
start failed in pod calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:28.280416 kubelet[2773]: E0123 18:00:28.280371 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:00:28.371790 containerd[1554]: time="2026-01-23T18:00:28.371518489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:00:28.710618 containerd[1554]: time="2026-01-23T18:00:28.710080647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:28.712888 containerd[1554]: time="2026-01-23T18:00:28.712740470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:00:28.712888 containerd[1554]: time="2026-01-23T18:00:28.712849868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:28.713102 kubelet[2773]: E0123 18:00:28.713016 2773 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:28.713102 kubelet[2773]: E0123 18:00:28.713064 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:28.713174 kubelet[2773]: E0123 18:00:28.713139 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:28.713337 kubelet[2773]: E0123 18:00:28.713168 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:00:31.375773 containerd[1554]: time="2026-01-23T18:00:31.375502255Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:00:31.738094 containerd[1554]: time="2026-01-23T18:00:31.737802480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:31.739871 containerd[1554]: time="2026-01-23T18:00:31.739500181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:00:31.739871 containerd[1554]: time="2026-01-23T18:00:31.739618979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:00:31.740370 kubelet[2773]: E0123 18:00:31.740254 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:31.740370 kubelet[2773]: E0123 18:00:31.740343 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:31.741214 kubelet[2773]: E0123 18:00:31.740934 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 23 18:00:31.744026 containerd[1554]: time="2026-01-23T18:00:31.743892129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:00:32.107171 containerd[1554]: time="2026-01-23T18:00:32.107083670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:32.108978 containerd[1554]: time="2026-01-23T18:00:32.108896855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:00:32.109088 containerd[1554]: time="2026-01-23T18:00:32.109017294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:00:32.109254 kubelet[2773]: E0123 18:00:32.109208 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:32.109339 kubelet[2773]: E0123 18:00:32.109287 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:32.109407 kubelet[2773]: E0123 18:00:32.109373 2773 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:32.110688 kubelet[2773]: E0123 18:00:32.109428 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:00:34.374058 kubelet[2773]: E0123 18:00:34.373954 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:00:35.374667 containerd[1554]: time="2026-01-23T18:00:35.373760861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:35.970404 containerd[1554]: time="2026-01-23T18:00:35.970319829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:35.972662 containerd[1554]: time="2026-01-23T18:00:35.972601189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:35.972786 containerd[1554]: time="2026-01-23T18:00:35.972723789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:35.973027 kubelet[2773]: E0123 18:00:35.972988 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:35.974605 kubelet[2773]: E0123 18:00:35.973556 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:35.974605 kubelet[2773]: E0123 18:00:35.973689 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-7pkzs_calico-apiserver(12cc26cd-794b-4ec3-9b6f-99c6d249650c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:35.974905 kubelet[2773]: E0123 18:00:35.974830 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:00:39.374095 kubelet[2773]: E0123 18:00:39.373641 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:00:40.372718 kubelet[2773]: E0123 18:00:40.372234 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:00:41.373065 kubelet[2773]: E0123 18:00:41.372927 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:00:42.374255 kubelet[2773]: E0123 18:00:42.374186 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:00:46.374247 kubelet[2773]: E0123 18:00:46.373773 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:00:49.374468 kubelet[2773]: E0123 18:00:49.373892 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:00:51.372417 kubelet[2773]: E0123 18:00:51.372222 2773 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:00:52.373997 kubelet[2773]: E0123 18:00:52.373291 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:00:52.373997 kubelet[2773]: E0123 18:00:52.373827 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:00:56.373077 kubelet[2773]: E0123 18:00:56.372991 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:01:00.371861 kubelet[2773]: E0123 18:01:00.371236 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:01:03.373445 kubelet[2773]: E0123 18:01:03.372614 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:01:03.375799 kubelet[2773]: E0123 18:01:03.375483 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:01:03.376359 kubelet[2773]: E0123 18:01:03.375513 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:01:04.373149 containerd[1554]: time="2026-01-23T18:01:04.373096609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:01:04.736565 containerd[1554]: time="2026-01-23T18:01:04.735736060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:04.740297 containerd[1554]: time="2026-01-23T18:01:04.739715586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:01:04.740297 containerd[1554]: time="2026-01-23T18:01:04.739805671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:01:04.742311 kubelet[2773]: E0123 18:01:04.740678 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:01:04.742311 kubelet[2773]: E0123 18:01:04.740732 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:01:04.742311 kubelet[2773]: E0123 18:01:04.740817 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:04.745670 containerd[1554]: time="2026-01-23T18:01:04.745610091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:01:05.106625 containerd[1554]: time="2026-01-23T18:01:05.105930575Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 23 18:01:05.119768 containerd[1554]: time="2026-01-23T18:01:05.119643700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:01:05.123139 containerd[1554]: time="2026-01-23T18:01:05.119673981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:01:05.123352 kubelet[2773]: E0123 18:01:05.122393 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:01:05.123352 kubelet[2773]: E0123 18:01:05.122468 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:01:05.123352 kubelet[2773]: E0123 18:01:05.122552 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" logger="UnhandledError" Jan 23 18:01:05.123465 kubelet[2773]: E0123 18:01:05.122596 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:01:10.373993 kubelet[2773]: E0123 18:01:10.373859 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:01:14.374131 
containerd[1554]: time="2026-01-23T18:01:14.373090137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:01:14.712834 containerd[1554]: time="2026-01-23T18:01:14.712462265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:14.714013 containerd[1554]: time="2026-01-23T18:01:14.713932835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:01:14.714304 containerd[1554]: time="2026-01-23T18:01:14.714001199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:01:14.714612 kubelet[2773]: E0123 18:01:14.714512 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:01:14.714612 kubelet[2773]: E0123 18:01:14.714581 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:01:14.715142 kubelet[2773]: E0123 18:01:14.714870 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:14.715623 kubelet[2773]: E0123 18:01:14.715326 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:01:15.376299 kubelet[2773]: E0123 18:01:15.376052 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:01:15.380849 containerd[1554]: time="2026-01-23T18:01:15.380083937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:01:15.729080 containerd[1554]: time="2026-01-23T18:01:15.728556172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:15.730540 containerd[1554]: time="2026-01-23T18:01:15.730374885Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:01:15.730540 containerd[1554]: time="2026-01-23T18:01:15.730373725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:01:15.731322 kubelet[2773]: E0123 18:01:15.730940 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:15.731670 kubelet[2773]: E0123 18:01:15.731345 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:15.731670 kubelet[2773]: E0123 18:01:15.731443 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:15.731670 kubelet[2773]: E0123 18:01:15.731479 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:01:17.374484 containerd[1554]: time="2026-01-23T18:01:17.374367166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:01:17.753119 containerd[1554]: time="2026-01-23T18:01:17.752795057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:17.754968 containerd[1554]: time="2026-01-23T18:01:17.754894750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:01:17.755239 containerd[1554]: time="2026-01-23T18:01:17.755033559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:01:17.755303 kubelet[2773]: E0123 18:01:17.755229 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:01:17.755643 kubelet[2773]: E0123 18:01:17.755333 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:01:17.756023 kubelet[2773]: E0123 18:01:17.755985 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tqdk2_calico-system(0960425a-3ec9-4577-9a39-48675b2f3498): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:17.756102 kubelet[2773]: E0123 18:01:17.756054 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:01:18.375215 kubelet[2773]: E0123 18:01:18.375077 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:01:25.377289 containerd[1554]: time="2026-01-23T18:01:25.376730089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:01:25.713216 containerd[1554]: time="2026-01-23T18:01:25.712231246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:25.716538 containerd[1554]: time="2026-01-23T18:01:25.716430013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:01:25.716765 containerd[1554]: time="2026-01-23T18:01:25.716456175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:01:25.717141 kubelet[2773]: E0123 18:01:25.717040 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:01:25.717141 kubelet[2773]: E0123 18:01:25.717123 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:01:25.719206 kubelet[2773]: E0123 18:01:25.717871 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start 
failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:25.721624 containerd[1554]: time="2026-01-23T18:01:25.721567045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:01:26.066583 containerd[1554]: time="2026-01-23T18:01:26.066522324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:26.069449 containerd[1554]: time="2026-01-23T18:01:26.069358240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:01:26.069725 containerd[1554]: time="2026-01-23T18:01:26.069509291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:01:26.069784 kubelet[2773]: E0123 18:01:26.069689 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:01:26.069784 kubelet[2773]: E0123 18:01:26.069747 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:01:26.069906 kubelet[2773]: E0123 18:01:26.069854 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lc96b_calico-system(19e7b4f8-da69-42b9-9afe-ffb180e828e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:26.070364 kubelet[2773]: E0123 18:01:26.069906 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:01:26.372952 kubelet[2773]: E0123 18:01:26.372759 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:01:27.379981 kubelet[2773]: E0123 18:01:27.378792 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:01:27.382172 containerd[1554]: time="2026-01-23T18:01:27.382060467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:01:27.721568 containerd[1554]: time="2026-01-23T18:01:27.720881458Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:27.722637 containerd[1554]: time="2026-01-23T18:01:27.722562975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:01:27.723056 containerd[1554]: time="2026-01-23T18:01:27.722680983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:01:27.723175 kubelet[2773]: E0123 18:01:27.722989 2773 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:27.723175 kubelet[2773]: E0123 18:01:27.723059 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:27.725413 kubelet[2773]: E0123 18:01:27.725344 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-7pkzs_calico-apiserver(12cc26cd-794b-4ec3-9b6f-99c6d249650c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:27.725550 kubelet[2773]: E0123 18:01:27.725432 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:01:28.061782 systemd[1]: Started sshd@8-49.12.73.152:22-68.220.241.50:56640.service - OpenSSH per-connection server daemon (68.220.241.50:56640). 
Jan 23 18:01:28.719852 sshd[4974]: Accepted publickey for core from 68.220.241.50 port 56640 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:28.724093 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:28.731611 systemd-logind[1520]: New session 8 of user core. Jan 23 18:01:28.736555 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 18:01:29.275500 sshd[4977]: Connection closed by 68.220.241.50 port 56640 Jan 23 18:01:29.278571 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:29.283665 systemd[1]: sshd@8-49.12.73.152:22-68.220.241.50:56640.service: Deactivated successfully. Jan 23 18:01:29.290507 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:01:29.292607 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:01:29.297717 systemd-logind[1520]: Removed session 8. Jan 23 18:01:29.381713 kubelet[2773]: E0123 18:01:29.381644 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:01:29.383567 kubelet[2773]: E0123 18:01:29.383240 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:01:34.392467 systemd[1]: Started sshd@9-49.12.73.152:22-68.220.241.50:41904.service - OpenSSH per-connection server daemon (68.220.241.50:41904). Jan 23 18:01:35.027040 sshd[4994]: Accepted publickey for core from 68.220.241.50 port 41904 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:35.031646 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:35.040473 systemd-logind[1520]: New session 9 of user core. Jan 23 18:01:35.044926 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:01:35.624232 sshd[4999]: Connection closed by 68.220.241.50 port 41904 Jan 23 18:01:35.623559 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:35.632052 systemd[1]: sshd@9-49.12.73.152:22-68.220.241.50:41904.service: Deactivated successfully. Jan 23 18:01:35.637476 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:01:35.642915 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:01:35.645180 systemd-logind[1520]: Removed session 9. 
Jan 23 18:01:38.373048 kubelet[2773]: E0123 18:01:38.372922 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:01:39.372306 kubelet[2773]: E0123 18:01:39.371835 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:01:39.372883 kubelet[2773]: E0123 18:01:39.372844 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:01:40.372960 kubelet[2773]: E0123 
18:01:40.372896 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:01:40.741763 systemd[1]: Started sshd@10-49.12.73.152:22-68.220.241.50:41906.service - OpenSSH per-connection server daemon (68.220.241.50:41906). Jan 23 18:01:41.401816 sshd[5037]: Accepted publickey for core from 68.220.241.50 port 41906 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:41.404384 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:41.411658 systemd-logind[1520]: New session 10 of user core. Jan 23 18:01:41.420831 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:01:42.001171 sshd[5040]: Connection closed by 68.220.241.50 port 41906 Jan 23 18:01:42.001009 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:42.008730 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. 
Jan 23 18:01:42.009239 systemd[1]: sshd@10-49.12.73.152:22-68.220.241.50:41906.service: Deactivated successfully. Jan 23 18:01:42.014642 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:01:42.017362 systemd-logind[1520]: Removed session 10. Jan 23 18:01:42.117220 systemd[1]: Started sshd@11-49.12.73.152:22-68.220.241.50:41918.service - OpenSSH per-connection server daemon (68.220.241.50:41918). Jan 23 18:01:42.763216 sshd[5053]: Accepted publickey for core from 68.220.241.50 port 41918 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:42.764936 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:42.774474 systemd-logind[1520]: New session 11 of user core. Jan 23 18:01:42.780704 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:01:43.366399 sshd[5056]: Connection closed by 68.220.241.50 port 41918 Jan 23 18:01:43.367064 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:43.375679 systemd[1]: sshd@11-49.12.73.152:22-68.220.241.50:41918.service: Deactivated successfully. Jan 23 18:01:43.381605 kubelet[2773]: E0123 18:01:43.381539 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:01:43.385415 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:01:43.389912 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:01:43.395453 systemd-logind[1520]: Removed session 11. 
Jan 23 18:01:43.482141 systemd[1]: Started sshd@12-49.12.73.152:22-68.220.241.50:57318.service - OpenSSH per-connection server daemon (68.220.241.50:57318). Jan 23 18:01:44.138047 sshd[5066]: Accepted publickey for core from 68.220.241.50 port 57318 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:44.140757 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:44.154319 systemd-logind[1520]: New session 12 of user core. Jan 23 18:01:44.159541 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:01:44.372428 kubelet[2773]: E0123 18:01:44.372337 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:01:44.704543 sshd[5069]: Connection closed by 68.220.241.50 port 57318 Jan 23 18:01:44.705597 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:44.714643 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. 
Jan 23 18:01:44.714831 systemd[1]: sshd@12-49.12.73.152:22-68.220.241.50:57318.service: Deactivated successfully. Jan 23 18:01:44.721147 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:01:44.728822 systemd-logind[1520]: Removed session 12. Jan 23 18:01:49.827222 systemd[1]: Started sshd@13-49.12.73.152:22-68.220.241.50:57334.service - OpenSSH per-connection server daemon (68.220.241.50:57334). Jan 23 18:01:50.372602 kubelet[2773]: E0123 18:01:50.372562 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:01:50.373690 kubelet[2773]: E0123 18:01:50.373642 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:01:50.500609 sshd[5086]: Accepted publickey for core from 68.220.241.50 port 57334 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:50.503341 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 23 18:01:50.515396 systemd-logind[1520]: New session 13 of user core. Jan 23 18:01:50.519909 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:01:51.076465 sshd[5089]: Connection closed by 68.220.241.50 port 57334 Jan 23 18:01:51.077708 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:51.087422 systemd[1]: sshd@13-49.12.73.152:22-68.220.241.50:57334.service: Deactivated successfully. Jan 23 18:01:51.092996 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:01:51.094796 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:01:51.098859 systemd-logind[1520]: Removed session 13. Jan 23 18:01:51.378857 kubelet[2773]: E0123 18:01:51.377934 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:01:54.375537 kubelet[2773]: E0123 18:01:54.375420 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:01:55.372109 kubelet[2773]: E0123 18:01:55.372002 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:01:56.188578 systemd[1]: Started sshd@14-49.12.73.152:22-68.220.241.50:55628.service - OpenSSH per-connection server daemon (68.220.241.50:55628). Jan 23 18:01:56.849294 sshd[5101]: Accepted publickey for core from 68.220.241.50 port 55628 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:01:56.850588 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:56.858155 systemd-logind[1520]: New session 14 of user core. Jan 23 18:01:56.866408 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:01:57.400772 sshd[5104]: Connection closed by 68.220.241.50 port 55628 Jan 23 18:01:57.403689 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:57.411370 systemd[1]: sshd@14-49.12.73.152:22-68.220.241.50:55628.service: Deactivated successfully. 
Jan 23 18:01:57.415194 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:01:57.416730 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:01:57.422479 systemd-logind[1520]: Removed session 14. Jan 23 18:01:58.374587 kubelet[2773]: E0123 18:01:58.374520 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:02:02.515615 systemd[1]: Started sshd@15-49.12.73.152:22-68.220.241.50:55242.service - OpenSSH per-connection server daemon (68.220.241.50:55242). Jan 23 18:02:03.133464 sshd[5118]: Accepted publickey for core from 68.220.241.50 port 55242 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:02:03.135392 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:02:03.145359 systemd-logind[1520]: New session 15 of user core. Jan 23 18:02:03.152730 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 18:02:03.651018 sshd[5121]: Connection closed by 68.220.241.50 port 55242 Jan 23 18:02:03.653716 sshd-session[5118]: pam_unix(sshd:session): session closed for user core Jan 23 18:02:03.662815 systemd[1]: sshd@15-49.12.73.152:22-68.220.241.50:55242.service: Deactivated successfully. Jan 23 18:02:03.663361 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:02:03.669879 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:02:03.674567 systemd-logind[1520]: Removed session 15. Jan 23 18:02:05.373955 kubelet[2773]: E0123 18:02:05.372332 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:02:05.375153 kubelet[2773]: E0123 18:02:05.375114 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:02:06.370598 kubelet[2773]: E0123 18:02:06.370546 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59" Jan 23 18:02:08.374117 kubelet[2773]: E0123 18:02:08.374050 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6" Jan 23 18:02:08.775589 systemd[1]: Started sshd@16-49.12.73.152:22-68.220.241.50:55258.service - OpenSSH per-connection server daemon (68.220.241.50:55258). 
Jan 23 18:02:09.432920 sshd[5134]: Accepted publickey for core from 68.220.241.50 port 55258 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:02:09.436262 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:02:09.444209 systemd-logind[1520]: New session 16 of user core. Jan 23 18:02:09.448499 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:02:09.989342 sshd[5137]: Connection closed by 68.220.241.50 port 55258 Jan 23 18:02:09.990507 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Jan 23 18:02:09.997516 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:02:09.997528 systemd[1]: sshd@16-49.12.73.152:22-68.220.241.50:55258.service: Deactivated successfully. Jan 23 18:02:10.001898 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:02:10.005223 systemd-logind[1520]: Removed session 16. Jan 23 18:02:10.098573 systemd[1]: Started sshd@17-49.12.73.152:22-68.220.241.50:55266.service - OpenSSH per-connection server daemon (68.220.241.50:55266). 
Jan 23 18:02:10.371340 kubelet[2773]: E0123 18:02:10.371174 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498" Jan 23 18:02:10.740106 sshd[5173]: Accepted publickey for core from 68.220.241.50 port 55266 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:02:10.741728 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:02:10.748228 systemd-logind[1520]: New session 17 of user core. Jan 23 18:02:10.752523 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 18:02:11.373304 kubelet[2773]: E0123 18:02:11.372232 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38" Jan 23 18:02:11.456791 sshd[5176]: Connection closed by 68.220.241.50 port 55266 Jan 23 18:02:11.459056 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Jan 23 18:02:11.467079 systemd[1]: sshd@17-49.12.73.152:22-68.220.241.50:55266.service: Deactivated successfully. Jan 23 18:02:11.471843 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:02:11.473457 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:02:11.475770 systemd-logind[1520]: Removed session 17. Jan 23 18:02:11.571867 systemd[1]: Started sshd@18-49.12.73.152:22-68.220.241.50:55274.service - OpenSSH per-connection server daemon (68.220.241.50:55274). 
Jan 23 18:02:12.219405 sshd[5186]: Accepted publickey for core from 68.220.241.50 port 55274 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:02:12.220855 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:02:12.228535 systemd-logind[1520]: New session 18 of user core. Jan 23 18:02:12.234504 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:02:13.457314 sshd[5189]: Connection closed by 68.220.241.50 port 55274 Jan 23 18:02:13.458182 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Jan 23 18:02:13.464995 systemd[1]: sshd@18-49.12.73.152:22-68.220.241.50:55274.service: Deactivated successfully. Jan 23 18:02:13.468058 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:02:13.470789 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:02:13.473398 systemd-logind[1520]: Removed session 18. Jan 23 18:02:13.565838 systemd[1]: Started sshd@19-49.12.73.152:22-68.220.241.50:49354.service - OpenSSH per-connection server daemon (68.220.241.50:49354). Jan 23 18:02:14.212627 sshd[5205]: Accepted publickey for core from 68.220.241.50 port 49354 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:02:14.214644 sshd-session[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:02:14.220395 systemd-logind[1520]: New session 19 of user core. Jan 23 18:02:14.226589 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 18:02:14.912959 sshd[5208]: Connection closed by 68.220.241.50 port 49354 Jan 23 18:02:14.913872 sshd-session[5205]: pam_unix(sshd:session): session closed for user core Jan 23 18:02:14.919681 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:02:14.920557 systemd[1]: session-19.scope: Deactivated successfully. 
Jan 23 18:02:14.922887 systemd[1]: sshd@19-49.12.73.152:22-68.220.241.50:49354.service: Deactivated successfully. Jan 23 18:02:14.928567 systemd-logind[1520]: Removed session 19. Jan 23 18:02:15.017601 systemd[1]: Started sshd@20-49.12.73.152:22-68.220.241.50:49364.service - OpenSSH per-connection server daemon (68.220.241.50:49364). Jan 23 18:02:15.639582 sshd[5220]: Accepted publickey for core from 68.220.241.50 port 49364 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 18:02:15.641958 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:02:15.649291 systemd-logind[1520]: New session 20 of user core. Jan 23 18:02:15.656479 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 18:02:16.169802 sshd[5223]: Connection closed by 68.220.241.50 port 49364 Jan 23 18:02:16.170700 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Jan 23 18:02:16.176593 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:02:16.178905 systemd[1]: sshd@20-49.12.73.152:22-68.220.241.50:49364.service: Deactivated successfully. Jan 23 18:02:16.185075 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:02:16.187371 systemd-logind[1520]: Removed session 20. 
Jan 23 18:02:18.372037 kubelet[2773]: E0123 18:02:18.371309 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a" Jan 23 18:02:19.380523 kubelet[2773]: E0123 18:02:19.380049 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c" Jan 23 18:02:19.381362 kubelet[2773]: E0123 18:02:19.380387 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 18:02:19.381362 kubelet[2773]: E0123 18:02:19.381310 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59"
Jan 23 18:02:21.286010 systemd[1]: Started sshd@21-49.12.73.152:22-68.220.241.50:49370.service - OpenSSH per-connection server daemon (68.220.241.50:49370).
Jan 23 18:02:21.928245 sshd[5244]: Accepted publickey for core from 68.220.241.50 port 49370 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 18:02:21.931406 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:02:21.939557 systemd-logind[1520]: New session 21 of user core.
Jan 23 18:02:21.944512 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 18:02:22.371494 kubelet[2773]: E0123 18:02:22.370866 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498"
Jan 23 18:02:22.459096 sshd[5247]: Connection closed by 68.220.241.50 port 49370
Jan 23 18:02:22.457885 sshd-session[5244]: pam_unix(sshd:session): session closed for user core
Jan 23 18:02:22.464832 systemd[1]: sshd@21-49.12.73.152:22-68.220.241.50:49370.service: Deactivated successfully.
Jan 23 18:02:22.468369 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 18:02:22.470856 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit.
Jan 23 18:02:22.474081 systemd-logind[1520]: Removed session 21.
Jan 23 18:02:23.377010 kubelet[2773]: E0123 18:02:23.376921 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38"
Jan 23 18:02:27.570801 systemd[1]: Started sshd@22-49.12.73.152:22-68.220.241.50:53722.service - OpenSSH per-connection server daemon (68.220.241.50:53722).
Jan 23 18:02:28.229463 sshd[5259]: Accepted publickey for core from 68.220.241.50 port 53722 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 18:02:28.232540 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 18:02:28.241661 systemd-logind[1520]: New session 22 of user core.
Jan 23 18:02:28.249542 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 23 18:02:28.765942 sshd[5262]: Connection closed by 68.220.241.50 port 53722
Jan 23 18:02:28.768313 sshd-session[5259]: pam_unix(sshd:session): session closed for user core
Jan 23 18:02:28.775008 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit.
Jan 23 18:02:28.776049 systemd[1]: sshd@22-49.12.73.152:22-68.220.241.50:53722.service: Deactivated successfully.
Jan 23 18:02:28.782240 systemd[1]: session-22.scope: Deactivated successfully.
Jan 23 18:02:28.786140 systemd-logind[1520]: Removed session 22.
Jan 23 18:02:31.373454 kubelet[2773]: E0123 18:02:31.373315 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c"
Jan 23 18:02:31.374340 kubelet[2773]: E0123 18:02:31.374029 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a"
Jan 23 18:02:33.373322 kubelet[2773]: E0123 18:02:33.373136 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59"
Jan 23 18:02:34.372240 containerd[1554]: time="2026-01-23T18:02:34.372154525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 18:02:34.373219 kubelet[2773]: E0123 18:02:34.373052 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 18:02:34.725445 containerd[1554]: time="2026-01-23T18:02:34.725261616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:02:34.727054 containerd[1554]: time="2026-01-23T18:02:34.726963029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 18:02:34.727249 containerd[1554]: time="2026-01-23T18:02:34.727105663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 18:02:34.727449 kubelet[2773]: E0123 18:02:34.727347 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 18:02:34.727449 kubelet[2773]: E0123 18:02:34.727426 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 18:02:34.728083 kubelet[2773]: E0123 18:02:34.727589 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:02:34.729288 containerd[1554]: time="2026-01-23T18:02:34.729224940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 18:02:35.079564 containerd[1554]: time="2026-01-23T18:02:35.079397315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:02:35.081062 containerd[1554]: time="2026-01-23T18:02:35.080916378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 18:02:35.081173 containerd[1554]: time="2026-01-23T18:02:35.081047733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:02:35.081468 kubelet[2773]: E0123 18:02:35.081428 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 18:02:35.081605 kubelet[2773]: E0123 18:02:35.081585 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 18:02:35.081770 kubelet[2773]: E0123 18:02:35.081714 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-86cf47b744-dzg5h_calico-system(35b6ccfd-7d16-48a4-9d79-4f9d7b111e38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:02:35.081925 kubelet[2773]: E0123 18:02:35.081863 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38"
Jan 23 18:02:37.372651 kubelet[2773]: E0123 18:02:37.372383 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tqdk2" podUID="0960425a-3ec9-4577-9a39-48675b2f3498"
Jan 23 18:02:43.767935 kubelet[2773]: E0123 18:02:43.766925 2773 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44070->10.0.0.2:2379: read: connection timed out"
Jan 23 18:02:44.224923 systemd[1]: cri-containerd-f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59.scope: Deactivated successfully.
Jan 23 18:02:44.226283 systemd[1]: cri-containerd-f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59.scope: Consumed 43.026s CPU time, 113.4M memory peak.
Jan 23 18:02:44.228009 containerd[1554]: time="2026-01-23T18:02:44.227942473Z" level=info msg="received container exit event container_id:\"f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59\" id:\"f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59\" pid:3103 exit_status:1 exited_at:{seconds:1769191364 nanos:227088534}"
Jan 23 18:02:44.254664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59-rootfs.mount: Deactivated successfully.
Jan 23 18:02:44.368078 kubelet[2773]: I0123 18:02:44.368040 2773 scope.go:117] "RemoveContainer" containerID="f21a543d8db5be5338534568d21d4ef39a746502d5e9756d692fde55c7e3ce59"
Jan 23 18:02:44.371300 containerd[1554]: time="2026-01-23T18:02:44.370783504Z" level=info msg="CreateContainer within sandbox \"9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 18:02:44.383303 containerd[1554]: time="2026-01-23T18:02:44.381449086Z" level=info msg="Container 5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:02:44.392523 containerd[1554]: time="2026-01-23T18:02:44.392471580Z" level=info msg="CreateContainer within sandbox \"9edb551d4f5b872770196dc65df30a2021baa5932a8b68b2482e048c0b8bf39e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a\""
Jan 23 18:02:44.393249 containerd[1554]: time="2026-01-23T18:02:44.393210202Z" level=info msg="StartContainer for \"5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a\""
Jan 23 18:02:44.394449 containerd[1554]: time="2026-01-23T18:02:44.394402453Z" level=info msg="connecting to shim 5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a" address="unix:///run/containerd/s/c11eebfe9ef545d04104e422788724b6ce3bd4e714cdb504cfaa65b66d33e530" protocol=ttrpc version=3
Jan 23 18:02:44.417510 systemd[1]: Started cri-containerd-5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a.scope - libcontainer container 5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a.
Jan 23 18:02:44.455018 containerd[1554]: time="2026-01-23T18:02:44.454944391Z" level=info msg="StartContainer for \"5cee3ee591d285dd5d3837ead9b7beb4cde93d69ba5d17984e9857924adf757a\" returns successfully"
Jan 23 18:02:44.640745 systemd[1]: cri-containerd-2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0.scope: Deactivated successfully.
Jan 23 18:02:44.641568 systemd[1]: cri-containerd-2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0.scope: Consumed 4.650s CPU time, 68.8M memory peak, 1.9M read from disk.
Jan 23 18:02:44.643920 containerd[1554]: time="2026-01-23T18:02:44.643873508Z" level=info msg="received container exit event container_id:\"2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0\" id:\"2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0\" pid:2624 exit_status:1 exited_at:{seconds:1769191364 nanos:643490398}"
Jan 23 18:02:44.701018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0-rootfs.mount: Deactivated successfully.
Jan 23 18:02:45.376214 containerd[1554]: time="2026-01-23T18:02:45.375972433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:02:45.377374 kubelet[2773]: E0123 18:02:45.377103 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lc96b" podUID="19e7b4f8-da69-42b9-9afe-ffb180e828e6"
Jan 23 18:02:45.719113 containerd[1554]: time="2026-01-23T18:02:45.718812515Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:02:45.721239 containerd[1554]: time="2026-01-23T18:02:45.721074344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:02:45.721239 containerd[1554]: time="2026-01-23T18:02:45.721196941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:02:45.721807 kubelet[2773]: E0123 18:02:45.721730 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:02:45.721807 kubelet[2773]: E0123 18:02:45.721797 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:02:45.722017 kubelet[2773]: E0123 18:02:45.721885 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f4f6d594f-f5jb7_calico-apiserver(974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:02:45.722017 kubelet[2773]: E0123 18:02:45.721926 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-f5jb7" podUID="974cfe4b-f68c-4b1b-b7ed-efcbbe2e4c59"
Jan 23 18:02:46.372808 containerd[1554]: time="2026-01-23T18:02:46.372582637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 18:02:46.373253 kubelet[2773]: E0123 18:02:46.373188 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f4f6d594f-7pkzs" podUID="12cc26cd-794b-4ec3-9b6f-99c6d249650c"
Jan 23 18:02:46.384252 kubelet[2773]: I0123 18:02:46.384213 2773 scope.go:117] "RemoveContainer" containerID="2c313bc070d536d33ec58cf4b48fcf29e0dd22f50fb557ad8e210ba82b792de0"
Jan 23 18:02:46.390785 containerd[1554]: time="2026-01-23T18:02:46.390731610Z" level=info msg="CreateContainer within sandbox \"996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 18:02:46.409372 containerd[1554]: time="2026-01-23T18:02:46.408656427Z" level=info msg="Container 41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:02:46.423070 containerd[1554]: time="2026-01-23T18:02:46.422984441Z" level=info msg="CreateContainer within sandbox \"996667d842348a4f23606e4a63544b05e57ca3bafdbb8c9b32fff6ed76322160\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17\""
Jan 23 18:02:46.423842 containerd[1554]: time="2026-01-23T18:02:46.423806223Z" level=info msg="StartContainer for \"41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17\""
Jan 23 18:02:46.426229 containerd[1554]: time="2026-01-23T18:02:46.426141693Z" level=info msg="connecting to shim 41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17" address="unix:///run/containerd/s/6f2367f314f6d3d045d900e57ddecfdc84b5db6697d369683d3a6855670761f2" protocol=ttrpc version=3
Jan 23 18:02:46.451635 systemd[1]: Started cri-containerd-41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17.scope - libcontainer container 41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17.
Jan 23 18:02:46.511233 containerd[1554]: time="2026-01-23T18:02:46.511184597Z" level=info msg="StartContainer for \"41865e58e9ae11b63475b07208f1243aa7e80ba7ec69041d93766e81ff4adb17\" returns successfully"
Jan 23 18:02:46.715473 containerd[1554]: time="2026-01-23T18:02:46.714831207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:02:46.717229 containerd[1554]: time="2026-01-23T18:02:46.716786966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 18:02:46.717604 containerd[1554]: time="2026-01-23T18:02:46.716862884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:02:46.717930 kubelet[2773]: E0123 18:02:46.717886 2773 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:02:46.718031 kubelet[2773]: E0123 18:02:46.717936 2773 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:02:46.718071 kubelet[2773]: E0123 18:02:46.718047 2773 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6b9444d76-tdrjj_calico-system(01f4753b-deaf-4717-98ef-d3ec1b50a10a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:02:46.718156 kubelet[2773]: E0123 18:02:46.718080 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b9444d76-tdrjj" podUID="01f4753b-deaf-4717-98ef-d3ec1b50a10a"
Jan 23 18:02:48.316295 systemd[1]: cri-containerd-e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24.scope: Deactivated successfully.
Jan 23 18:02:48.316896 systemd[1]: cri-containerd-e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24.scope: Consumed 5.441s CPU time, 26.5M memory peak, 2.7M read from disk.
Jan 23 18:02:48.323395 containerd[1554]: time="2026-01-23T18:02:48.323262886Z" level=info msg="received container exit event container_id:\"e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24\" id:\"e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24\" pid:2612 exit_status:1 exited_at:{seconds:1769191368 nanos:321871872}"
Jan 23 18:02:48.354906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24-rootfs.mount: Deactivated successfully.
Jan 23 18:02:48.373633 kubelet[2773]: E0123 18:02:48.373565 2773 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86cf47b744-dzg5h" podUID="35b6ccfd-7d16-48a4-9d79-4f9d7b111e38"
Jan 23 18:02:48.399546 kubelet[2773]: I0123 18:02:48.399501 2773 scope.go:117] "RemoveContainer" containerID="e4f6e590de81e50d75ff50251b3fd5a652c3353d53bab3959d7d36722b3b9e24"
Jan 23 18:02:48.402309 containerd[1554]: time="2026-01-23T18:02:48.402231173Z" level=info msg="CreateContainer within sandbox \"49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 18:02:48.416179 containerd[1554]: time="2026-01-23T18:02:48.415605284Z" level=info msg="Container 47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:02:48.432329 containerd[1554]: time="2026-01-23T18:02:48.430793960Z" level=info msg="CreateContainer within sandbox \"49956425480b0ac059bc78719ebb412c29f0e3a7d71725d5be87fdf4a62db7ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74\""
Jan 23 18:02:48.432329 containerd[1554]: time="2026-01-23T18:02:48.431338150Z" level=info msg="StartContainer for \"47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74\""
Jan 23 18:02:48.433121 containerd[1554]: time="2026-01-23T18:02:48.433092078Z" level=info msg="connecting to shim 47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74" address="unix:///run/containerd/s/fd7c4e19c81dac7021e1bdba368e2a09e03113ba894f45616e384c7dc87bb878" protocol=ttrpc version=3
Jan 23 18:02:48.465500 systemd[1]: Started cri-containerd-47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74.scope - libcontainer container 47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74.
Jan 23 18:02:48.522876 containerd[1554]: time="2026-01-23T18:02:48.522837283Z" level=info msg="StartContainer for \"47a03cb5e65723e662847d6fa1c62e1078fd4f752925622e718ec26301851a74\" returns successfully"