Jan 23 17:55:06.788888 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 23 17:55:06.788912 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026 Jan 23 17:55:06.788923 kernel: KASLR enabled Jan 23 17:55:06.788928 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 23 17:55:06.788934 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 23 17:55:06.788939 kernel: random: crng init done Jan 23 17:55:06.788946 kernel: secureboot: Secure boot disabled Jan 23 17:55:06.788952 kernel: ACPI: Early table checksum verification disabled Jan 23 17:55:06.788957 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 23 17:55:06.788963 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 23 17:55:06.788970 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.788976 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.788982 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.788988 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.788995 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.789002 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.789010 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.789018 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.789024 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 23 17:55:06.789030 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 23 17:55:06.789036 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 23 17:55:06.789042 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 23 17:55:06.789048 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 23 17:55:06.789054 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Jan 23 17:55:06.789060 kernel: Zone ranges: Jan 23 17:55:06.789066 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 17:55:06.789074 kernel: DMA32 empty Jan 23 17:55:06.789080 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 23 17:55:06.789086 kernel: Device empty Jan 23 17:55:06.789092 kernel: Movable zone start for each node Jan 23 17:55:06.789098 kernel: Early memory node ranges Jan 23 17:55:06.789104 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 23 17:55:06.789110 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 23 17:55:06.789116 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 23 17:55:06.789122 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 23 17:55:06.789128 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 23 17:55:06.789134 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 23 17:55:06.789140 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 23 17:55:06.789147 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 23 17:55:06.789154 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jan 23 
17:55:06.789163 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 23 17:55:06.789169 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 23 17:55:06.789176 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 23 17:55:06.789184 kernel: psci: probing for conduit method from ACPI. Jan 23 17:55:06.789191 kernel: psci: PSCIv1.1 detected in firmware. Jan 23 17:55:06.789197 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 17:55:06.789203 kernel: psci: Trusted OS migration not required Jan 23 17:55:06.789210 kernel: psci: SMC Calling Convention v1.1 Jan 23 17:55:06.789216 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 23 17:55:06.789223 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 23 17:55:06.789230 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 23 17:55:06.789236 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 17:55:06.789243 kernel: Detected PIPT I-cache on CPU0 Jan 23 17:55:06.789249 kernel: CPU features: detected: GIC system register CPU interface Jan 23 17:55:06.789257 kernel: CPU features: detected: Spectre-v4 Jan 23 17:55:06.789264 kernel: CPU features: detected: Spectre-BHB Jan 23 17:55:06.789270 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 23 17:55:06.789277 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 23 17:55:06.789283 kernel: CPU features: detected: ARM erratum 1418040 Jan 23 17:55:06.789289 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 23 17:55:06.789296 kernel: alternatives: applying boot alternatives Jan 23 17:55:06.789303 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:55:06.789310 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 17:55:06.789317 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 17:55:06.789323 kernel: Fallback order for Node 0: 0 Jan 23 17:55:06.789331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Jan 23 17:55:06.789338 kernel: Policy zone: Normal Jan 23 17:55:06.789344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 17:55:06.789350 kernel: software IO TLB: area num 2. Jan 23 17:55:06.789357 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 23 17:55:06.789363 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 17:55:06.789369 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 17:55:06.789376 kernel: rcu: RCU event tracing is enabled. Jan 23 17:55:06.789383 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 17:55:06.789389 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 17:55:06.789396 kernel: Tracing variant of Tasks RCU enabled. Jan 23 17:55:06.789402 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 23 17:55:06.789410 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 17:55:06.789417 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
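The dentry and inode cache lines above encode a simple relationship: entry count times a pointer-sized bucket gives the table size, and the printed "order" is log2 of that size in 4 KiB pages. A quick sketch verifying the figures from this boot (the 8-byte bucket size is an assumption about a 64-bit kernel, not printed in the log):

```python
import math

PAGE = 4096
BUCKET = 8  # assumed: one 64-bit pointer per hash bucket

def table_order(entries, bucket=BUCKET, page=PAGE):
    """Return (table bytes, allocation order) for a hash table."""
    size = entries * bucket
    return size, int(math.log2(size / page))

# Values printed during this boot:
assert table_order(524288) == (4194304, 10)  # Dentry cache: order 10, 4194304 bytes
assert table_order(262144) == (2097152, 9)   # Inode-cache:  order 9,  2097152 bytes
```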
Jan 23 17:55:06.789423 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 17:55:06.789429 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 17:55:06.789436 kernel: GICv3: 256 SPIs implemented Jan 23 17:55:06.789443 kernel: GICv3: 0 Extended SPIs implemented Jan 23 17:55:06.789449 kernel: Root IRQ handler: gic_handle_irq Jan 23 17:55:06.789455 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 23 17:55:06.789461 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 23 17:55:06.789468 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 23 17:55:06.789474 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 23 17:55:06.789482 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Jan 23 17:55:06.789489 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Jan 23 17:55:06.789495 kernel: GICv3: using LPI property table @0x0000000100120000 Jan 23 17:55:06.789502 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Jan 23 17:55:06.789575 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 17:55:06.789584 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 17:55:06.789591 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 23 17:55:06.789598 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 23 17:55:06.789604 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 23 17:55:06.789611 kernel: Console: colour dummy device 80x25 Jan 23 17:55:06.789618 kernel: ACPI: Core revision 20240827 Jan 23 17:55:06.789627 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 23 17:55:06.789634 kernel: pid_max: default: 32768 minimum: 301 Jan 23 17:55:06.789641 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 23 17:55:06.789647 kernel: landlock: Up and running. Jan 23 17:55:06.789654 kernel: SELinux: Initializing. Jan 23 17:55:06.789660 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:55:06.789667 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 17:55:06.789674 kernel: rcu: Hierarchical SRCU implementation. Jan 23 17:55:06.789681 kernel: rcu: Max phase no-delay instances is 400. Jan 23 17:55:06.789689 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 23 17:55:06.789695 kernel: Remapping and enabling EFI services. Jan 23 17:55:06.789702 kernel: smp: Bringing up secondary CPUs ... Jan 23 17:55:06.789708 kernel: Detected PIPT I-cache on CPU1 Jan 23 17:55:06.789715 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 23 17:55:06.789722 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Jan 23 17:55:06.789728 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 23 17:55:06.789735 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 23 17:55:06.789742 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 17:55:06.789749 kernel: SMP: Total of 2 processors activated. 
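With delay-loop calibration skipped, lpj is derived straight from the 25 MHz architected timer, and the BogoMIPS figure falls out arithmetically. A sketch of that arithmetic; CONFIG_HZ is not printed in the log, but lpj=25000 at 25 MHz implies HZ=1000:

```python
TIMER_HZ = 25_000_000  # "arch_timer: cp15 timer(s) running at 25.00MHz"
HZ = 1000              # implied by lpj = TIMER_HZ / HZ = 25000

lpj = TIMER_HZ // HZ                # -> 25000, matching "(lpj=25000)"
bogomips = lpj / (500_000 / HZ)     # -> 50.0, matching "50.00 BogoMIPS"
resolution_ns = 1e9 / TIMER_HZ      # -> 40.0, matching "resolution 40ns"
print(lpj, bogomips, resolution_ns)
```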
Jan 23 17:55:06.789761 kernel: CPU: All CPU(s) started at EL1 Jan 23 17:55:06.789768 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 17:55:06.789777 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 23 17:55:06.789784 kernel: CPU features: detected: Common not Private translations Jan 23 17:55:06.789803 kernel: CPU features: detected: CRC32 instructions Jan 23 17:55:06.789811 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 23 17:55:06.789819 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 23 17:55:06.789828 kernel: CPU features: detected: LSE atomic instructions Jan 23 17:55:06.789835 kernel: CPU features: detected: Privileged Access Never Jan 23 17:55:06.789842 kernel: CPU features: detected: RAS Extension Support Jan 23 17:55:06.789849 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 23 17:55:06.789856 kernel: alternatives: applying system-wide alternatives Jan 23 17:55:06.789863 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 23 17:55:06.789871 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved) Jan 23 17:55:06.789881 kernel: devtmpfs: initialized Jan 23 17:55:06.789889 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 17:55:06.789900 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 17:55:06.789908 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 23 17:55:06.789915 kernel: 0 pages in range for non-PLT usage Jan 23 17:55:06.789922 kernel: 508400 pages in range for PLT usage Jan 23 17:55:06.789929 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 17:55:06.789936 kernel: SMBIOS 3.0.0 present. Jan 23 17:55:06.789943 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 23 17:55:06.789949 kernel: DMI: Memory slots populated: 1/1 Jan 23 17:55:06.789956 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 17:55:06.789965 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 17:55:06.789972 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 17:55:06.789979 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 17:55:06.789986 kernel: audit: initializing netlink subsys (disabled) Jan 23 17:55:06.789993 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Jan 23 17:55:06.790000 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 17:55:06.790008 kernel: cpuidle: using governor menu Jan 23 17:55:06.790015 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
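The "Memory: 3858852K/4096000K available" line can be cross-checked against the node 0 range reported earlier (0x40000000-0x139ffffff) and the "Total pages: 1024000" figure; the gap between total and available is the kernel image, reserved ranges, and the 16 MiB CMA area itemised in the same line. A quick verification:

```python
start, end = 0x40000000, 0x139FFFFFF
span = end - start + 1                     # bytes covered by node 0

assert span // 1024 == 4_096_000           # "4096000K" total
assert span // 4096 == 1_024_000           # "Total pages: 1024000"

print(f"not available: {4_096_000 - 3_858_852} KiB")  # -> 237148 KiB
```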
Jan 23 17:55:06.790022 kernel: ASID allocator initialised with 32768 entries Jan 23 17:55:06.790030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 17:55:06.790037 kernel: Serial: AMBA PL011 UART driver Jan 23 17:55:06.790044 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 17:55:06.790051 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 17:55:06.790058 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 17:55:06.790064 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 17:55:06.790071 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 17:55:06.790078 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 17:55:06.790085 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 17:55:06.790094 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 17:55:06.790101 kernel: ACPI: Added _OSI(Module Device) Jan 23 17:55:06.790108 kernel: ACPI: Added _OSI(Processor Device) Jan 23 17:55:06.790115 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 17:55:06.790122 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 17:55:06.790129 kernel: ACPI: Interpreter enabled Jan 23 17:55:06.790136 kernel: ACPI: Using GIC for interrupt routing Jan 23 17:55:06.790142 kernel: ACPI: MCFG table detected, 1 entries Jan 23 17:55:06.790149 kernel: ACPI: CPU0 has been hot-added Jan 23 17:55:06.790157 kernel: ACPI: CPU1 has been hot-added Jan 23 17:55:06.790164 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 23 17:55:06.790172 kernel: printk: legacy console [ttyAMA0] enabled Jan 23 17:55:06.790179 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 23 17:55:06.790330 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 17:55:06.790395 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 17:55:06.790453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 17:55:06.790543 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 23 17:55:06.790610 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 23 17:55:06.790619 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 23 17:55:06.790627 kernel: PCI host bridge to bus 0000:00 Jan 23 17:55:06.790699 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 23 17:55:06.790754 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 17:55:06.790849 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 23 17:55:06.790906 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 23 17:55:06.790991 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 23 17:55:06.791061 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Jan 23 17:55:06.791121 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Jan 23 17:55:06.791180 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Jan 23 17:55:06.791246 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.791304 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Jan 23 17:55:06.791367 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 23 
17:55:06.791424 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 17:55:06.791482 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 23 17:55:06.791573 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.791635 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Jan 23 17:55:06.791697 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 23 17:55:06.791755 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 23 17:55:06.791836 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.791898 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Jan 23 17:55:06.791955 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 23 17:55:06.792013 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 23 17:55:06.792070 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 23 17:55:06.792139 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.792197 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Jan 23 17:55:06.792258 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 23 17:55:06.792318 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 23 17:55:06.792375 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 23 17:55:06.792441 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.792500 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Jan 23 17:55:06.792588 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 23 17:55:06.792649 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 23 17:55:06.792710 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 23 17:55:06.792780 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.792855 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Jan 23 17:55:06.792917 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 23 17:55:06.792976 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Jan 23 17:55:06.793035 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 23 17:55:06.793103 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.793165 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Jan 23 17:55:06.793234 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 23 17:55:06.793294 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Jan 23 17:55:06.793352 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Jan 23 17:55:06.793419 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.793477 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Jan 23 17:55:06.793554 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 23 17:55:06.793617 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Jan 23 17:55:06.793681 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 23 17:55:06.793740 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Jan 23 17:55:06.793836 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 23 17:55:06.793904 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Jan 23 17:55:06.793976 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Jan 23 17:55:06.794039 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Jan 23 17:55:06.794108 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 17:55:06.794169 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Jan 23 17:55:06.794229 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 23 17:55:06.794289 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 17:55:06.794355 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 23 17:55:06.794417 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Jan 23 17:55:06.794487 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Jan 23 17:55:06.794565 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Jan 23 17:55:06.794629 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 23 17:55:06.794715 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 23 17:55:06.794780 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 23 17:55:06.794872 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 23 17:55:06.794941 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Jan 23 17:55:06.795002 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 23 17:55:06.795069 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Jan 23 17:55:06.795129 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Jan 23 17:55:06.795188 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 23 17:55:06.795255 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 23 17:55:06.795315 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Jan 23 17:55:06.795378 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Jan 23 17:55:06.795438 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 23 17:55:06.795498 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 23 17:55:06.795592 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 23 17:55:06.795653 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 23 17:55:06.795715 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 23 17:55:06.795773 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 23 17:55:06.795874 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 23 17:55:06.795940 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 23 17:55:06.795999 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 23 17:55:06.796056 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 23 17:55:06.796118 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 23 17:55:06.796176 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 23 17:55:06.796237 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 23 17:55:06.796296 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 23 17:55:06.796357 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 23 17:55:06.796415 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 23 17:55:06.796476 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 23 17:55:06.796569 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 23 17:55:06.796631 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 23 17:55:06.796695 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 23 17:55:06.796754 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 23 17:55:06.796828 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 23 17:55:06.796893 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 23 17:55:06.796951 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 23 17:55:06.797009 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 23 17:55:06.797069 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 23 17:55:06.797129 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 23 17:55:06.797191 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 23 17:55:06.797251 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 23 17:55:06.797308 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 23 17:55:06.797367 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 23 17:55:06.797425 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 23 17:55:06.797485 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 23 17:55:06.797577 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 23 17:55:06.797641 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 23 17:55:06.797700 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 23 17:55:06.797759 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 23 17:55:06.797835 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 23 17:55:06.797895 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 23 17:55:06.797953 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 23 17:55:06.798012 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 23 17:55:06.798074 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 23 
17:55:06.798133 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 23 17:55:06.798192 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 23 17:55:06.798251 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 23 17:55:06.798310 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 23 17:55:06.798373 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Jan 23 17:55:06.798434 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Jan 23 17:55:06.798493 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Jan 23 17:55:06.798588 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Jan 23 17:55:06.798655 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Jan 23 17:55:06.798713 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 23 17:55:06.798773 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Jan 23 17:55:06.798850 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 23 17:55:06.798914 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Jan 23 17:55:06.798974 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 23 17:55:06.799034 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Jan 23 17:55:06.799092 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 23 17:55:06.799151 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Jan 23 17:55:06.799210 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 23 17:55:06.799271 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Jan 23 17:55:06.799333 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 23 17:55:06.799393 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Jan 23 17:55:06.799451 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 23 17:55:06.799524 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Jan 23 17:55:06.799618 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Jan 23 17:55:06.799689 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Jan 23 17:55:06.799755 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 23 17:55:06.799857 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 23 17:55:06.799931 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 23 17:55:06.799992 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 23 17:55:06.800317 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 23 17:55:06.800391 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 23 17:55:06.800450 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 17:55:06.800561 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 23 17:55:06.800633 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 23 17:55:06.800701 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 23 17:55:06.800759 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 23 17:55:06.800837 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 17:55:06.800908 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Jan 23 17:55:06.800969 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 23 17:55:06.801028 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 23 17:55:06.801087 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 23 17:55:06.801148 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 23 17:55:06.801205 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 17:55:06.801271 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 23 17:55:06.801330 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 23 17:55:06.801388 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 23 17:55:06.801446 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 23 17:55:06.801503 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 17:55:06.802009 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 23 17:55:06.802094 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 23 17:55:06.802155 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 23 17:55:06.802215 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 23 17:55:06.802320 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 23 17:55:06.802384 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 17:55:06.802452 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 23 17:55:06.802537 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 23 17:55:06.802604 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 23 17:55:06.802679 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 23 17:55:06.802739 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 23 17:55:06.802813 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 17:55:06.802885 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Jan 23 17:55:06.802947 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Jan 23 17:55:06.803010 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Jan 23 17:55:06.803070 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 23 17:55:06.803132 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 23 17:55:06.803191 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 23 17:55:06.803251 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 17:55:06.803317 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 23 17:55:06.803377 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 23 17:55:06.803435 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 23 17:55:06.803495 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 17:55:06.803600 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 23 17:55:06.803664 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 23 17:55:06.803727 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 17:55:06.803787 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:55:06.803865 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 23 17:55:06.803920 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Jan 23 17:55:06.803972 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 23 17:55:06.804038 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 23 17:55:06.804093 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 23 17:55:06.804151 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 17:55:06.804218 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 23 17:55:06.804273 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 23 17:55:06.804326 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 17:55:06.804386 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 23 17:55:06.804440 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 23 17:55:06.804498 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 17:55:06.804596 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 23 17:55:06.804656 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 23 17:55:06.804711 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 17:55:06.804775 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 23 17:55:06.804879 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 23 17:55:06.804938 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 17:55:06.805010 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 23 17:55:06.805065 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 23 17:55:06.805118 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 17:55:06.805178 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 23 17:55:06.805233 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 23 17:55:06.805292 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 17:55:06.805354 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 23 17:55:06.805411 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 23 17:55:06.805464 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 17:55:06.806627 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 23 17:55:06.806708 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 23 17:55:06.806764 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:55:06.806774 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 17:55:06.806781 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 17:55:06.806843 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 17:55:06.806851 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 17:55:06.806858 kernel: iommu: Default domain type: Translated Jan 23 17:55:06.806866 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 17:55:06.806874 kernel: efivars: Registered efivars operations Jan 23 17:55:06.806881 kernel: vgaarb: loaded Jan 23 17:55:06.806888 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 17:55:06.806896 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 17:55:06.806903 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 17:55:06.806913 kernel: pnp: PnP ACPI init Jan 23 17:55:06.807002 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 23 
17:55:06.807015 kernel: pnp: PnP ACPI: found 1 devices Jan 23 17:55:06.807022 kernel: NET: Registered PF_INET protocol family Jan 23 17:55:06.807030 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 17:55:06.807037 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 17:55:06.807045 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 17:55:06.807052 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:55:06.807062 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 17:55:06.807069 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 17:55:06.807077 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:55:06.807084 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:55:06.807092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 17:55:06.807163 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 23 17:55:06.807174 kernel: PCI: CLS 0 bytes, default 64 Jan 23 17:55:06.807181 kernel: kvm [1]: HYP mode not available Jan 23 17:55:06.807189 kernel: Initialise system trusted keyrings Jan 23 17:55:06.807199 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 17:55:06.807206 kernel: Key type asymmetric registered Jan 23 17:55:06.807213 kernel: Asymmetric key parser 'x509' registered Jan 23 17:55:06.807221 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 17:55:06.807228 kernel: io scheduler mq-deadline registered Jan 23 17:55:06.807236 kernel: io scheduler kyber registered Jan 23 17:55:06.807243 kernel: io scheduler bfq registered Jan 23 17:55:06.807251 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 17:55:06.807314 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 23 17:55:06.807377 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 23 17:55:06.807442 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.807504 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 23 17:55:06.808664 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 23 17:55:06.808730 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.808812 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 23 17:55:06.808879 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 23 17:55:06.808939 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.809010 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 23 17:55:06.809072 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 23 17:55:06.809131 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.809194 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 23 17:55:06.809253 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 23 17:55:06.809311 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.809372 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Jan 23 17:55:06.809434 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 23 17:55:06.809492 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.809571 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 23 17:55:06.809632 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 23 17:55:06.809691 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.809753 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 23 17:55:06.809857 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 23 17:55:06.809922 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.809938 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 23 17:55:06.810001 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 23 17:55:06.810060 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 23 17:55:06.810119 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:55:06.810129 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 17:55:06.810137 kernel: ACPI: button: Power Button [PWRB] Jan 23 17:55:06.810145 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:55:06.810208 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 23 17:55:06.810277 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 23 17:55:06.810288 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 17:55:06.810296 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:55:06.810356 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 23 17:55:06.810367 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 23 17:55:06.810375 kernel: thunder_xcv, ver 1.0 Jan 23 17:55:06.810382 kernel: thunder_bgx, ver 1.0 Jan 23 17:55:06.810389 kernel: nicpf, ver 1.0 Jan 23 17:55:06.810397 kernel: nicvf, ver 1.0 Jan 23 17:55:06.810468 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 17:55:06.812621 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:55:06 UTC (1769190906) Jan 23 17:55:06.812644 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 17:55:06.812652 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 17:55:06.812660 kernel: watchdog: NMI not fully supported Jan 23 17:55:06.812668 kernel: watchdog: Hard watchdog permanently disabled Jan 23 17:55:06.812676 kernel: NET: Registered PF_INET6 protocol family Jan 23 17:55:06.812683 kernel: Segment Routing with IPv6 Jan 23 17:55:06.812698 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 17:55:06.812706 kernel: NET: Registered PF_PACKET protocol family Jan 23 17:55:06.812713 kernel: Key type dns_resolver registered Jan 23 17:55:06.812721 kernel: registered taskstats version 1 Jan 23 17:55:06.812728 kernel: Loading compiled-in X.509 certificates Jan 23 17:55:06.812736 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb' Jan 23 17:55:06.812743 kernel: Demotion targets for Node 0: null Jan 23 17:55:06.812750 kernel: Key type .fscrypt 
registered Jan 23 17:55:06.812757 kernel: Key type fscrypt-provisioning registered Jan 23 17:55:06.812765 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 17:55:06.812774 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:55:06.812782 kernel: ima: No architecture policies found Jan 23 17:55:06.812805 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:55:06.812814 kernel: clk: Disabling unused clocks Jan 23 17:55:06.812821 kernel: PM: genpd: Disabling unused power domains Jan 23 17:55:06.812828 kernel: Warning: unable to open an initial console. Jan 23 17:55:06.812836 kernel: Freeing unused kernel memory: 39552K Jan 23 17:55:06.812844 kernel: Run /init as init process Jan 23 17:55:06.812851 kernel: with arguments: Jan 23 17:55:06.812862 kernel: /init Jan 23 17:55:06.812869 kernel: with environment: Jan 23 17:55:06.812876 kernel: HOME=/ Jan 23 17:55:06.812883 kernel: TERM=linux Jan 23 17:55:06.812892 systemd[1]: Successfully made /usr/ read-only. Jan 23 17:55:06.812903 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:55:06.812911 systemd[1]: Detected virtualization kvm. Jan 23 17:55:06.812920 systemd[1]: Detected architecture arm64. Jan 23 17:55:06.812928 systemd[1]: Running in initrd. Jan 23 17:55:06.812936 systemd[1]: No hostname configured, using default hostname. Jan 23 17:55:06.812944 systemd[1]: Hostname set to . Jan 23 17:55:06.812951 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:55:06.812959 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:55:06.812967 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:55:06.812975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:55:06.812984 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:55:06.812993 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:55:06.813001 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:55:06.813009 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:55:06.813018 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 17:55:06.813026 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 17:55:06.813034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:55:06.813043 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:55:06.813053 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:55:06.813060 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:55:06.813069 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:55:06.813076 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:55:06.813084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
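The systemd banner above lists compile-time features as a +/- flag string. Splitting it into enabled and disabled sets is a one-liner each; a small parser, with the string copied verbatim from this boot:

```python
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
            "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
            "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

enabled = {f[1:] for f in features.split() if f.startswith("+")}
disabled = {f[1:] for f in features.split() if f.startswith("-")}

assert "SELINUX" in enabled and "APPARMOR" in disabled
```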
Jan 23 17:55:06.813092 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:55:06.813100 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:55:06.813108 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:55:06.813117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:55:06.813125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:55:06.813133 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:55:06.813141 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:55:06.813149 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:55:06.813157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:55:06.813165 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:55:06.813173 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:55:06.813182 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:55:06.813190 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:55:06.813198 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:55:06.813206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:55:06.813243 systemd-journald[245]: Collecting audit messages is disabled. Jan 23 17:55:06.813265 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:55:06.813274 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:55:06.813283 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:55:06.813292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:55:06.813301 systemd-journald[245]: Journal started Jan 23 17:55:06.813320 systemd-journald[245]: Runtime Journal (/run/log/journal/26403e4ce34644a685aaf40ec14163f6) is 8M, max 76.5M, 68.5M free. Jan 23 17:55:06.811429 systemd-modules-load[247]: Inserted module 'overlay' Jan 23 17:55:06.816052 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:55:06.825730 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:55:06.831562 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:55:06.834669 systemd-modules-load[247]: Inserted module 'br_netfilter' Jan 23 17:55:06.835538 kernel: Bridge firewalling registered Jan 23 17:55:06.837241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:55:06.841917 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:55:06.844649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:55:06.850759 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:55:06.850902 systemd-tmpfiles[260]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 17:55:06.857462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
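Once systemd-journald is running (the "Journal started" line above), entries like these kernel messages can be read back programmatically instead of scraping text. A sketch using the python-systemd bindings; the bindings are an assumption here (a separate python-systemd package, not something shown in this log):

```python
# Assumes the python-systemd bindings are installed (not part of this image).
from systemd import journal

reader = journal.Reader()
reader.add_match(_TRANSPORT="kernel")  # keep only dmesg-sourced entries
reader.seek_head()

for entry in reader:
    # Each entry is a dict; MESSAGE carries the text seen in this log.
    print(entry["__REALTIME_TIMESTAMP"], entry["MESSAGE"])
```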
Jan 23 17:55:06.860340 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:55:06.874433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:55:06.884619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:55:06.887372 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:55:06.891311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:55:06.903053 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:55:06.909193 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:55:06.940225 systemd-resolved[282]: Positive Trust Anchors: Jan 23 17:55:06.941031 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:55:06.941820 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:55:06.952178 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:55:06.951237 systemd-resolved[282]: Defaulting to hostname 'linux'. Jan 23 17:55:06.954754 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:55:06.957170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:55:07.050543 kernel: SCSI subsystem initialized Jan 23 17:55:07.054543 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:55:07.063581 kernel: iscsi: registered transport (tcp) Jan 23 17:55:07.076846 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:55:07.076920 kernel: QLogic iSCSI HBA Driver Jan 23 17:55:07.100838 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:55:07.132590 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:55:07.134591 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:55:07.189704 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:55:07.192580 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
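dracut-cmdline echoes the full kernel command line above, including the dm-verity root hash for /usr. Parsing it into key/value pairs is straightforward; a sketch that reads /proc/cmdline at runtime (bare flags without "=" become True, and only the first "=" splits, so root=LABEL=ROOT keeps its value intact):

```python
def parse_cmdline(text: str) -> dict:
    """Split a kernel command line into a {key: value} dict."""
    params = {}
    for tok in text.split():
        key, sep, val = tok.partition("=")
        params[key] = val if sep else True
    return params

with open("/proc/cmdline") as f:
    cmdline = parse_cmdline(f.read())

# For the boot above this yields e.g.:
#   cmdline["root"] == "LABEL=ROOT"
#   cmdline["flatcar.oem.id"] == "hetzner"
#   cmdline["verity.usrhash"] == "5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d"
```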
Jan 23 17:55:07.266561 kernel: raid6: neonx8 gen() 15625 MB/s Jan 23 17:55:07.283592 kernel: raid6: neonx4 gen() 15657 MB/s Jan 23 17:55:07.300580 kernel: raid6: neonx2 gen() 13174 MB/s Jan 23 17:55:07.317565 kernel: raid6: neonx1 gen() 10419 MB/s Jan 23 17:55:07.334612 kernel: raid6: int64x8 gen() 6859 MB/s Jan 23 17:55:07.351591 kernel: raid6: int64x4 gen() 7325 MB/s Jan 23 17:55:07.368571 kernel: raid6: int64x2 gen() 6080 MB/s Jan 23 17:55:07.385578 kernel: raid6: int64x1 gen() 5017 MB/s Jan 23 17:55:07.385680 kernel: raid6: using algorithm neonx4 gen() 15657 MB/s Jan 23 17:55:07.402590 kernel: raid6: .... xor() 12295 MB/s, rmw enabled Jan 23 17:55:07.402670 kernel: raid6: using neon recovery algorithm Jan 23 17:55:07.407760 kernel: xor: measuring software checksum speed Jan 23 17:55:07.407844 kernel: 8regs : 21505 MB/sec Jan 23 17:55:07.407866 kernel: 32regs : 21710 MB/sec Jan 23 17:55:07.408547 kernel: arm64_neon : 23414 MB/sec Jan 23 17:55:07.408577 kernel: xor: using function: arm64_neon (23414 MB/sec) Jan 23 17:55:07.462578 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:55:07.471010 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:55:07.474990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:55:07.507189 systemd-udevd[494]: Using default interface naming scheme 'v255'. Jan 23 17:55:07.511421 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:55:07.517652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:55:07.550336 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jan 23 17:55:07.582181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:55:07.585780 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:55:07.650196 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:55:07.654886 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:55:07.744542 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 17:55:07.750074 kernel: ACPI: bus type USB registered Jan 23 17:55:07.750135 kernel: usbcore: registered new interface driver usbfs Jan 23 17:55:07.750147 kernel: usbcore: registered new interface driver hub Jan 23 17:55:07.750156 kernel: scsi host0: Virtio SCSI HBA Jan 23 17:55:07.750191 kernel: usbcore: registered new device driver usb Jan 23 17:55:07.771862 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 17:55:07.771935 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 17:55:07.771952 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:55:07.772099 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 17:55:07.774600 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 17:55:07.774886 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:55:07.776147 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 17:55:07.776753 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 17:55:07.778559 kernel: hub 1-0:1.0: USB hub found Jan 23 17:55:07.778747 kernel: hub 1-0:1.0: 4 ports detected Jan 23 17:55:07.778893 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
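The raid6 and xor lines above are boot-time micro-benchmarks: the kernel times each candidate routine and keeps the fastest (neonx4 for parity generation, arm64_neon for checksumming). The selection reduces to an argmax over measured throughput; a sketch using the numbers from this boot:

```python
raid6_gen = {  # MB/s, from the benchmark lines above
    "neonx8": 15625, "neonx4": 15657, "neonx2": 13174, "neonx1": 10419,
    "int64x8": 6859, "int64x4": 7325, "int64x2": 6080, "int64x1": 5017,
}
xor_speed = {"8regs": 21505, "32regs": 21710, "arm64_neon": 23414}

assert max(raid6_gen, key=raid6_gen.get) == "neonx4"      # "using algorithm neonx4"
assert max(xor_speed, key=xor_speed.get) == "arm64_neon"  # "using function: arm64_neon"
```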
Jan 23 17:55:07.779058 kernel: hub 2-0:1.0: USB hub found Jan 23 17:55:07.779160 kernel: hub 2-0:1.0: 4 ports detected Jan 23 17:55:07.789586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:55:07.790739 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:55:07.795736 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:55:07.798297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:55:07.803099 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:55:07.831121 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 23 17:55:07.831328 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 23 17:55:07.831405 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 23 17:55:07.831476 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 23 17:55:07.831582 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 17:55:07.834238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:55:07.837362 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 23 17:55:07.837571 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 23 17:55:07.837677 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 17:55:07.837688 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 23 17:55:07.841737 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 17:55:07.841823 kernel: GPT:17805311 != 80003071 Jan 23 17:55:07.841853 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:55:07.841874 kernel: GPT:17805311 != 80003071 Jan 23 17:55:07.842642 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:55:07.842678 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:55:07.843591 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 23 17:55:07.893198 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 17:55:07.909910 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 17:55:07.924039 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 17:55:07.924821 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 17:55:07.935822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 17:55:07.938273 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:55:07.953998 disk-uuid[599]: Primary Header is updated. Jan 23 17:55:07.953998 disk-uuid[599]: Secondary Entries is updated. Jan 23 17:55:07.953998 disk-uuid[599]: Secondary Header is updated. Jan 23 17:55:07.962858 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:55:07.964182 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:55:07.966525 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:55:07.967232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:55:07.968115 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:55:07.971699 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
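The GPT warnings above are expected on a freshly provisioned VM: the backup GPT header still sits at sector 17805311, where the original disk image ended, while the grown disk now ends at sector 80003071. The disk-uuid[599] lines above show the first-boot machinery rewriting the secondary header and entries itself, so no manual parted run is needed. The logged sda size can be sanity-checked from the block count; a quick Python check using only numbers that appear in the log:

    # Sanity-check the logged sda geometry: 80003072 logical blocks of 512 bytes.
    blocks, block_size = 80003072, 512
    size_bytes = blocks * block_size
    print(f"{size_bytes / 1e9:.1f} GB / {size_bytes / 2**30:.1f} GiB")
    # -> 41.0 GB / 38.1 GiB, matching the sd 0:0:0:1 line above

    # Backup GPT header at LBA 17805311 vs. last usable LBA 80003071:
    grown = (80003071 - 17805311) * block_size
    print(f"disk grew by {grown / 2**30:.1f} GiB after the image was written")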
Jan 23 17:55:07.989578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:55:08.001479 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:55:08.018590 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 17:55:08.154385 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 17:55:08.154441 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 17:55:08.155609 kernel: usbcore: registered new interface driver usbhid Jan 23 17:55:08.156541 kernel: usbhid: USB HID core driver Jan 23 17:55:08.258708 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 17:55:08.387547 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 17:55:08.441348 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 17:55:08.986538 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:55:08.987350 disk-uuid[600]: The operation has completed successfully. Jan 23 17:55:09.045698 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:55:09.046577 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:55:09.070894 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:55:09.092733 sh[629]: Success Jan 23 17:55:09.109584 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:55:09.109646 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:55:09.111338 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:55:09.121578 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:55:09.171915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 17:55:09.180361 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 17:55:09.185222 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 17:55:09.212548 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (641) Jan 23 17:55:09.215545 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64 Jan 23 17:55:09.215606 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:55:09.222088 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 17:55:09.222152 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:55:09.222191 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:55:09.224605 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 17:55:09.225259 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:55:09.226764 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:55:09.227475 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:55:09.232772 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 17:55:09.265566 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (674) Jan 23 17:55:09.267586 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:55:09.267655 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:55:09.272557 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:55:09.272607 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:55:09.272618 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:55:09.277676 kernel: BTRFS info (device sda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:55:09.279258 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:55:09.281889 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 17:55:09.399670 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:55:09.407076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:55:09.425065 ignition[724]: Ignition 2.22.0 Jan 23 17:55:09.425081 ignition[724]: Stage: fetch-offline Jan 23 17:55:09.425112 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:09.425119 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:09.429871 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:55:09.425198 ignition[724]: parsed url from cmdline: "" Jan 23 17:55:09.425200 ignition[724]: no config URL provided Jan 23 17:55:09.425205 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:55:09.425210 ignition[724]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:55:09.425215 ignition[724]: failed to fetch config: resource requires networking Jan 23 17:55:09.425486 ignition[724]: Ignition finished successfully Jan 23 17:55:09.458586 systemd-networkd[818]: lo: Link UP Jan 23 17:55:09.458597 systemd-networkd[818]: lo: Gained carrier Jan 23 17:55:09.460846 systemd-networkd[818]: Enumeration completed Jan 23 17:55:09.461272 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:09.461275 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:55:09.462005 systemd-networkd[818]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:09.462009 systemd-networkd[818]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:55:09.462495 systemd-networkd[818]: eth0: Link UP Jan 23 17:55:09.462662 systemd-networkd[818]: eth1: Link UP Jan 23 17:55:09.462834 systemd-networkd[818]: eth0: Gained carrier Jan 23 17:55:09.462845 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:09.464813 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:55:09.465544 systemd[1]: Reached target network.target - Network. Jan 23 17:55:09.466820 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 17:55:09.466952 systemd-networkd[818]: eth1: Gained carrier Jan 23 17:55:09.466974 systemd-networkd[818]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:09.499932 ignition[822]: Ignition 2.22.0 Jan 23 17:55:09.500570 ignition[822]: Stage: fetch Jan 23 17:55:09.500740 ignition[822]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:09.500750 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:09.502390 systemd-networkd[818]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 17:55:09.500889 ignition[822]: parsed url from cmdline: "" Jan 23 17:55:09.500892 ignition[822]: no config URL provided Jan 23 17:55:09.500898 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:55:09.500906 ignition[822]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:55:09.500946 ignition[822]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 17:55:09.501406 ignition[822]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 17:55:09.536649 systemd-networkd[818]: eth0: DHCPv4 address 46.224.74.11/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 17:55:09.702311 ignition[822]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 17:55:09.707917 ignition[822]: GET result: OK Jan 23 17:55:09.708083 ignition[822]: parsing config with SHA512: 31fcc807d5fe6f280b001f4847ce42c2b99e8aeff9fedcd2524476ccbc6c85e5cb7cb871c8ab1e744bab8c58cf9d9acbdd65ff5bf08acae3714680436d38a52d Jan 23 17:55:09.713908 unknown[822]: fetched base config from "system" Jan 23 17:55:09.713919 unknown[822]: fetched base config from "system" Jan 23 17:55:09.714422 ignition[822]: fetch: fetch complete Jan 23 17:55:09.713928 unknown[822]: fetched user config from "hetzner" Jan 23 17:55:09.714431 ignition[822]: fetch: fetch passed Jan 23 17:55:09.714477 ignition[822]: Ignition finished successfully Jan 23 17:55:09.719284 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:55:09.722859 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 17:55:09.749232 ignition[829]: Ignition 2.22.0 Jan 23 17:55:09.749973 ignition[829]: Stage: kargs Jan 23 17:55:09.750132 ignition[829]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:09.750151 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:09.752842 ignition[829]: kargs: kargs passed Jan 23 17:55:09.752899 ignition[829]: Ignition finished successfully Jan 23 17:55:09.755910 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:55:09.758645 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 17:55:09.794033 ignition[836]: Ignition 2.22.0 Jan 23 17:55:09.794046 ignition[836]: Stage: disks Jan 23 17:55:09.794189 ignition[836]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:09.794197 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:09.796488 ignition[836]: disks: disks passed Jan 23 17:55:09.796559 ignition[836]: Ignition finished successfully Jan 23 17:55:09.800451 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:55:09.801769 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:55:09.803707 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
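The fetch stage above shows the usual first-boot race: the initial GET to the Hetzner metadata service fails with "network is unreachable" because DHCP has not completed yet, a retry after the leases arrive succeeds, and Ignition then logs the SHA512 of the payload it is about to parse. A minimal Python sketch of that fetch-and-hash loop; the URL is copied from the log, while the timeout, delay, and attempt count are illustrative assumptions rather than Ignition's actual policy:

    import hashlib, time, urllib.request

    # URL copied from the log; retry count, delay, and timeout are assumptions.
    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(attempts: int = 5, delay: float = 1.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except OSError as exc:  # "network is unreachable" before DHCP, etc.
                print(f"GET attempt #{attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("metadata service never became reachable")

    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())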
Jan 23 17:55:09.805426 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:55:09.806935 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:55:09.808124 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:55:09.810090 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:55:09.837233 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 17:55:09.840269 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:55:09.843691 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:55:09.927543 kernel: EXT4-fs (sda9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none. Jan 23 17:55:09.928738 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:55:09.930345 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:55:09.933054 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:55:09.934610 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:55:09.940153 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 17:55:09.941241 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:55:09.941277 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:55:09.947937 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:55:09.949215 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 17:55:09.960584 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853) Jan 23 17:55:09.963807 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:55:09.963874 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:55:09.973629 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:55:09.973699 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:55:09.973711 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:55:09.977227 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:55:10.012035 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 17:55:10.016363 coreos-metadata[855]: Jan 23 17:55:10.016 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 17:55:10.019809 coreos-metadata[855]: Jan 23 17:55:10.019 INFO Fetch successful Jan 23 17:55:10.020441 coreos-metadata[855]: Jan 23 17:55:10.019 INFO wrote hostname ci-4459-2-3-3-b08bb0c7a1 to /sysroot/etc/hostname Jan 23 17:55:10.022201 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory Jan 23 17:55:10.024735 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:55:10.029617 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 17:55:10.034458 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 17:55:10.136104 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:55:10.138198 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:55:10.141115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
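The flatcar-metadata-hostname lines above do one small job: fetch the hostname from the same metadata service and write it into the not-yet-pivoted root at /sysroot/etc/hostname. A compact sketch of the equivalent steps, with the URL and destination path taken from the log and the timeout assumed:

    import urllib.request

    # URL and destination path from the log; the timeout is an assumption.
    META = "http://169.254.169.254/hetzner/v1/metadata/hostname"
    with urllib.request.urlopen(META, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")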
Jan 23 17:55:10.154540 kernel: BTRFS info (device sda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:55:10.171705 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:55:10.185359 ignition[970]: INFO : Ignition 2.22.0 Jan 23 17:55:10.185359 ignition[970]: INFO : Stage: mount Jan 23 17:55:10.186575 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:10.186575 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:10.186575 ignition[970]: INFO : mount: mount passed Jan 23 17:55:10.186575 ignition[970]: INFO : Ignition finished successfully Jan 23 17:55:10.188206 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:55:10.192230 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:55:10.213594 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:55:10.217129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:55:10.246551 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (981) Jan 23 17:55:10.249855 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:55:10.249925 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:55:10.253617 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:55:10.253691 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:55:10.253705 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:55:10.256850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:55:10.293893 ignition[998]: INFO : Ignition 2.22.0 Jan 23 17:55:10.293893 ignition[998]: INFO : Stage: files Jan 23 17:55:10.295223 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:10.295223 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:10.295223 ignition[998]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:55:10.298249 ignition[998]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:55:10.298249 ignition[998]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:55:10.300465 ignition[998]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:55:10.301502 ignition[998]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:55:10.302662 unknown[998]: wrote ssh authorized keys file for user: core Jan 23 17:55:10.304047 ignition[998]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:55:10.305246 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:55:10.305246 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 17:55:10.389958 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:55:10.475540 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 17:55:10.475540 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:55:10.475540 ignition[998]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:55:10.475540 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:55:10.475540 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:55:10.475540 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:55:10.484490 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:55:10.484490 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:55:10.484490 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:55:10.484490 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:55:10.484490 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:55:10.484490 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:55:10.493465 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:55:10.493465 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:55:10.493465 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 23 17:55:10.824407 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:55:10.912698 systemd-networkd[818]: eth1: Gained IPv6LL Jan 23 17:55:10.976671 systemd-networkd[818]: eth0: Gained IPv6LL Jan 23 17:55:11.396822 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 23 17:55:11.399897 ignition[998]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:55:11.401733 ignition[998]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:55:11.411545 ignition[998]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:55:11.411545 ignition[998]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:55:11.411545 ignition[998]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 17:55:11.411545 ignition[998]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 17:55:11.425908 ignition[998]: INFO : files: op(d): op(e): [finished] writing systemd drop-in 
"00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 17:55:11.425908 ignition[998]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 17:55:11.425908 ignition[998]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:55:11.425908 ignition[998]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:55:11.425908 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:55:11.425908 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:55:11.425908 ignition[998]: INFO : files: files passed Jan 23 17:55:11.425908 ignition[998]: INFO : Ignition finished successfully Jan 23 17:55:11.417849 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:55:11.419867 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:55:11.429656 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:55:11.438656 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:55:11.444004 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:55:11.456358 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:55:11.456358 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:55:11.459200 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:55:11.461337 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:55:11.463759 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:55:11.466122 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:55:11.517176 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:55:11.517373 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:55:11.519963 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:55:11.521557 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 17:55:11.523848 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:55:11.525071 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:55:11.553434 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:55:11.557050 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:55:11.582065 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:55:11.583553 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:55:11.584273 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:55:11.585556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:55:11.585742 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:55:11.587362 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 23 17:55:11.588697 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:55:11.589638 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:55:11.590684 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:55:11.591744 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:55:11.592875 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:55:11.593974 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:55:11.595040 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:55:11.596190 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:55:11.597242 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:55:11.598206 systemd[1]: Stopped target swap.target - Swaps. Jan 23 17:55:11.599091 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:55:11.599266 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:55:11.600559 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:55:11.601737 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:55:11.602800 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:55:11.602911 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:55:11.604015 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:55:11.604175 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:55:11.605719 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:55:11.605892 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:55:11.606968 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:55:11.607116 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:55:11.608035 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 17:55:11.608194 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:55:11.611014 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:55:11.611546 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:55:11.612957 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:55:11.619742 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:55:11.621137 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:55:11.621308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:55:11.624109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:55:11.624234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:55:11.633602 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:55:11.634377 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 23 17:55:11.647711 ignition[1051]: INFO : Ignition 2.22.0 Jan 23 17:55:11.647711 ignition[1051]: INFO : Stage: umount Jan 23 17:55:11.650196 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:55:11.650196 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:55:11.650196 ignition[1051]: INFO : umount: umount passed Jan 23 17:55:11.650196 ignition[1051]: INFO : Ignition finished successfully Jan 23 17:55:11.652213 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:55:11.656972 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:55:11.657087 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:55:11.665211 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:55:11.666315 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:55:11.667111 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:55:11.667163 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:55:11.667824 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:55:11.667863 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:55:11.671628 systemd[1]: Stopped target network.target - Network. Jan 23 17:55:11.674934 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:55:11.675017 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:55:11.681214 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:55:11.682300 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:55:11.685596 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:55:11.687608 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:55:11.690078 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:55:11.692660 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:55:11.692719 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:55:11.693439 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:55:11.693479 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:55:11.696743 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:55:11.696825 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:55:11.698350 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:55:11.698399 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:55:11.699451 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:55:11.700453 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:55:11.703071 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:55:11.703195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:55:11.705133 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:55:11.705195 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:55:11.706974 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:55:11.707096 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:55:11.713492 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Jan 23 17:55:11.714019 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:55:11.714140 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:55:11.716833 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 17:55:11.717433 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:55:11.718397 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:55:11.718440 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:55:11.721017 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:55:11.721613 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:55:11.721674 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:55:11.722471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:55:11.722546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:55:11.724399 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:55:11.724444 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:55:11.727390 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:55:11.727443 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:55:11.730286 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:55:11.736909 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:55:11.736992 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:55:11.751703 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:55:11.753528 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:55:11.755920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:55:11.755968 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:55:11.758326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 17:55:11.758392 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:55:11.760445 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:55:11.760504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:55:11.762260 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:55:11.762315 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:55:11.764009 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:55:11.764061 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:55:11.766561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:55:11.769142 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:55:11.769220 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:55:11.771287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:55:11.771355 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:55:11.774730 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 17:55:11.774795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:55:11.779962 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 17:55:11.780060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 17:55:11.780100 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:55:11.780390 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:55:11.783829 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:55:11.792000 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:55:11.792132 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:55:11.794946 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:55:11.797653 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:55:11.819693 systemd[1]: Switching root. Jan 23 17:55:11.869369 systemd-journald[245]: Journal stopped Jan 23 17:55:12.831284 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Jan 23 17:55:12.831375 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:55:12.831388 kernel: SELinux: policy capability open_perms=1 Jan 23 17:55:12.831397 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:55:12.831406 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:55:12.831420 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:55:12.831429 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:55:12.831443 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:55:12.831452 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:55:12.831463 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:55:12.831472 kernel: audit: type=1403 audit(1769190912.007:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 17:55:12.831482 systemd[1]: Successfully loaded SELinux policy in 63.547ms. Jan 23 17:55:12.831498 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.203ms. Jan 23 17:55:12.831901 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:55:12.831933 systemd[1]: Detected virtualization kvm. Jan 23 17:55:12.831944 systemd[1]: Detected architecture arm64. Jan 23 17:55:12.831954 systemd[1]: Detected first boot. Jan 23 17:55:12.831970 systemd[1]: Hostname set to . Jan 23 17:55:12.831980 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:55:12.831990 zram_generator::config[1095]: No configuration found. Jan 23 17:55:12.832005 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:55:12.832018 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:55:12.832032 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 17:55:12.832043 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:55:12.832052 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:55:12.832062 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Jan 23 17:55:12.832076 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:55:12.832086 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:55:12.832158 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:55:12.832172 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:55:12.832188 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:55:12.832200 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:55:12.832211 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:55:12.832221 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:55:12.832231 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:55:12.832241 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:55:12.832251 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 17:55:12.832261 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:55:12.832271 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 17:55:12.832283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:55:12.832293 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 17:55:12.832303 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:55:12.832313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:55:12.832323 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:55:12.832334 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:55:12.832344 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:55:12.832356 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:55:12.832366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:55:12.832376 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:55:12.832386 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:55:12.832396 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:55:12.832406 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:55:12.832415 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:55:12.832425 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:55:12.832435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:55:12.832446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:55:12.832457 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:55:12.832471 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:55:12.832481 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 17:55:12.832491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 23 17:55:12.832501 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:55:12.832526 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:55:12.832537 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:55:12.832547 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:55:12.832560 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:55:12.832571 systemd[1]: Reached target machines.target - Containers. Jan 23 17:55:12.832581 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:55:12.832591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:55:12.832601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:55:12.832611 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:55:12.832621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:55:12.832631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:55:12.832642 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:55:12.832652 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:55:12.832664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:55:12.832675 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:55:12.832685 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 17:55:12.832695 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:55:12.832705 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:55:12.832715 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 17:55:12.832727 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:55:12.832737 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:55:12.832747 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:55:12.832768 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:55:12.832780 kernel: fuse: init (API version 7.41) Jan 23 17:55:12.832790 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:55:12.832801 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:55:12.832812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:55:12.832825 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 17:55:12.832835 systemd[1]: Stopped verity-setup.service. Jan 23 17:55:12.832846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:55:12.832856 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:55:12.832866 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 23 17:55:12.832876 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:55:12.832886 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:55:12.832896 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 17:55:12.832906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:55:12.832917 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:55:12.832927 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:55:12.832939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:55:12.832949 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:55:12.832958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:55:12.832968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:55:12.832978 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:55:12.832988 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:55:12.832998 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:55:12.833050 systemd-journald[1159]: Collecting audit messages is disabled. Jan 23 17:55:12.833133 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:55:12.833148 kernel: loop: module loaded Jan 23 17:55:12.833160 systemd-journald[1159]: Journal started Jan 23 17:55:12.833183 systemd-journald[1159]: Runtime Journal (/run/log/journal/26403e4ce34644a685aaf40ec14163f6) is 8M, max 76.5M, 68.5M free. Jan 23 17:55:12.844628 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:55:12.844688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:55:12.844703 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:55:12.550591 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:55:12.562499 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 17:55:12.563119 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:55:12.852571 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:55:12.862903 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:55:12.866139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:55:12.867552 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 17:55:12.872530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:55:12.874896 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:55:12.881053 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:55:12.883697 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:55:12.885559 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:55:12.886471 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 23 17:55:12.887567 kernel: ACPI: bus type drm_connector registered Jan 23 17:55:12.891298 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:55:12.893476 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:55:12.893686 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:55:12.894639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:55:12.896050 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:55:12.898221 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:55:12.899152 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:55:12.901860 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:55:12.918612 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:55:12.922791 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:55:12.924708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:55:12.926936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:55:12.933719 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:55:12.936584 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:55:12.938375 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:55:12.944601 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:55:12.955595 kernel: loop0: detected capacity change from 0 to 100632 Jan 23 17:55:12.986931 systemd-journald[1159]: Time spent on flushing to /var/log/journal/26403e4ce34644a685aaf40ec14163f6 is 68.311ms for 1178 entries. Jan 23 17:55:12.986931 systemd-journald[1159]: System Journal (/var/log/journal/26403e4ce34644a685aaf40ec14163f6) is 8M, max 584.8M, 576.8M free. Jan 23 17:55:13.084969 systemd-journald[1159]: Received client request to flush runtime journal. Jan 23 17:55:13.085396 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:55:13.085428 kernel: loop1: detected capacity change from 0 to 119840 Jan 23 17:55:12.986570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:55:13.021085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:55:13.051555 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:55:13.054822 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:55:13.089238 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:55:13.092112 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 17:55:13.095549 kernel: loop2: detected capacity change from 0 to 211168 Jan 23 17:55:13.117938 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 23 17:55:13.117956 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 23 17:55:13.126591 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 17:55:13.146965 kernel: loop3: detected capacity change from 0 to 8 Jan 23 17:55:13.171556 kernel: loop4: detected capacity change from 0 to 100632 Jan 23 17:55:13.195540 kernel: loop5: detected capacity change from 0 to 119840 Jan 23 17:55:13.215725 kernel: loop6: detected capacity change from 0 to 211168 Jan 23 17:55:13.241572 kernel: loop7: detected capacity change from 0 to 8 Jan 23 17:55:13.243353 (sd-merge)[1242]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 23 17:55:13.244253 (sd-merge)[1242]: Merged extensions into '/usr'. Jan 23 17:55:13.249795 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:55:13.249818 systemd[1]: Reloading... Jan 23 17:55:13.382558 zram_generator::config[1269]: No configuration found. Jan 23 17:55:13.437467 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 17:55:13.585903 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:55:13.586136 systemd[1]: Reloading finished in 335 ms. Jan 23 17:55:13.607560 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:55:13.608589 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:55:13.609727 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:55:13.627845 systemd[1]: Starting ensure-sysext.service... Jan 23 17:55:13.631672 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:55:13.634244 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:55:13.655477 systemd[1]: Reload requested from client PID 1307 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:55:13.655497 systemd[1]: Reloading... Jan 23 17:55:13.682116 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:55:13.682149 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:55:13.682417 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:55:13.683716 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 17:55:13.684003 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Jan 23 17:55:13.684404 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 17:55:13.688743 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jan 23 17:55:13.688825 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jan 23 17:55:13.691822 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:55:13.691836 systemd-tmpfiles[1308]: Skipping /boot Jan 23 17:55:13.708930 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:55:13.708948 systemd-tmpfiles[1308]: Skipping /boot Jan 23 17:55:13.788535 zram_generator::config[1357]: No configuration found. Jan 23 17:55:14.009781 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 17:55:14.010065 systemd[1]: Reloading finished in 354 ms. 
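The loop0 through loop7 capacity lines above correspond to the four extension images being attached once when scanned and again when merged, after which sd-merge overlays containerd-flatcar, docker-flatcar, kubernetes, and oem-hetzner onto /usr and systemd reloads. systemd-sysext discovers those images in a fixed set of directories; a minimal sketch that lists what it would consider, assuming the documented search paths:

    import os

    # Directories systemd-sysext scans for extension images, per its docs;
    # listing only, no merging -- an illustration of the discovery step.
    for d in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        if os.path.isdir(d):
            for name in sorted(os.listdir(d)):
                print(os.path.join(d, name))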
Jan 23 17:55:14.014602 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 17:55:14.041204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:55:14.051213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:55:14.062645 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:55:14.067695 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:55:14.075955 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 17:55:14.081822 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:55:14.086618 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:55:14.093873 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:55:14.104032 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:55:14.106931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:55:14.113411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:55:14.120844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:55:14.122639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:55:14.122836 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:55:14.129883 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:55:14.132408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:55:14.133650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:55:14.133785 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:55:14.137241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:55:14.142850 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:55:14.143821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:55:14.143934 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:55:14.163578 systemd[1]: Finished ensure-sysext.service. Jan 23 17:55:14.169834 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 17:55:14.182724 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:55:14.185539 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 23 17:55:14.187177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:55:14.187333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:55:14.203534 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 17:55:14.205593 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:55:14.212235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:55:14.212836 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:55:14.216527 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:55:14.217086 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:55:14.235072 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 23 17:55:14.235311 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:55:14.237593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:55:14.251476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:55:14.253663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:55:14.259916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:55:14.260682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:55:14.260725 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:55:14.260773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:55:14.261267 augenrules[1463]: No rules Jan 23 17:55:14.263907 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:55:14.282368 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:55:14.283505 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:55:14.290185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 17:55:14.291246 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:55:14.291456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:55:14.294937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:55:14.295618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:55:14.311303 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:55:14.312294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:55:14.312365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:55:14.312534 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 17:55:14.328533 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 23 17:55:14.328633 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 17:55:14.328652 kernel: [drm] features: -context_init Jan 23 17:55:14.332818 kernel: [drm] number of scanouts: 1 Jan 23 17:55:14.332899 kernel: [drm] number of cap sets: 0 Jan 23 17:55:14.342539 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 17:55:14.368800 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 17:55:14.381559 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 17:55:14.387098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:55:14.416287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:55:14.443054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:55:14.444617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:55:14.448135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:55:14.575174 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:55:14.580282 systemd-networkd[1423]: lo: Link UP Jan 23 17:55:14.580850 systemd-networkd[1423]: lo: Gained carrier Jan 23 17:55:14.583060 systemd-networkd[1423]: Enumeration completed Jan 23 17:55:14.583625 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:55:14.583772 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:14.583842 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:55:14.584496 systemd-networkd[1423]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:14.584617 systemd-networkd[1423]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:55:14.585105 systemd-networkd[1423]: eth0: Link UP Jan 23 17:55:14.585312 systemd-networkd[1423]: eth0: Gained carrier Jan 23 17:55:14.585374 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:14.589774 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:55:14.595808 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:55:14.609883 systemd-networkd[1423]: eth1: Link UP Jan 23 17:55:14.611424 systemd-networkd[1423]: eth1: Gained carrier Jan 23 17:55:14.612573 systemd-networkd[1423]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:55:14.623018 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 17:55:14.624691 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:55:14.638564 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:55:14.640935 systemd-resolved[1425]: Positive Trust Anchors: Jan 23 17:55:14.641273 systemd-resolved[1425]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:55:14.641366 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:55:14.645407 systemd-resolved[1425]: Using system hostname 'ci-4459-2-3-3-b08bb0c7a1'. Jan 23 17:55:14.647267 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:55:14.648771 systemd[1]: Reached target network.target - Network. Jan 23 17:55:14.649569 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:55:14.649660 systemd-networkd[1423]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 17:55:14.651334 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:55:14.651982 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Jan 23 17:55:14.652452 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:55:14.654167 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:55:14.655959 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:55:14.656649 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:55:14.657396 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:55:14.658413 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:55:14.658449 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:55:14.659282 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:55:14.661013 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:55:14.661625 systemd-networkd[1423]: eth0: DHCPv4 address 46.224.74.11/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 17:55:14.663473 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:55:14.666809 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:55:14.667721 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:55:14.668406 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:55:14.671185 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:55:14.672265 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:55:14.674154 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:55:14.675152 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:55:14.675711 systemd[1]: Reached target basic.target - Basic System. Jan 23 17:55:14.676275 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 23 17:55:14.676313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:55:14.677474 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:55:14.679203 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:55:14.682936 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:55:14.684802 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:55:14.688735 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:55:14.696295 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:55:14.698618 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:55:14.699838 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:55:14.705060 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:55:14.706387 jq[1519]: false Jan 23 17:55:14.708038 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 23 17:55:14.711828 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:55:14.718000 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:55:14.725628 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:55:14.727338 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:55:14.729056 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:55:14.734043 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:55:14.739915 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:55:14.748245 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:55:14.749880 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:55:14.751550 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:55:14.764957 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:55:14.778576 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:55:14.779953 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:55:14.780152 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 17:55:14.789678 extend-filesystems[1520]: Found /dev/sda6 Jan 23 17:55:14.793357 jq[1533]: true Jan 23 17:55:14.803472 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:55:14.820120 coreos-metadata[1516]: Jan 23 17:55:14.817 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 17:55:14.820413 extend-filesystems[1520]: Found /dev/sda9 Jan 23 17:55:14.827181 update_engine[1531]: I20260123 17:55:14.825165 1531 main.cc:92] Flatcar Update Engine starting Jan 23 17:55:14.827718 extend-filesystems[1520]: Checking size of /dev/sda9 Jan 23 17:55:14.833174 coreos-metadata[1516]: Jan 23 17:55:14.827 INFO Fetch successful Jan 23 17:55:14.833174 coreos-metadata[1516]: Jan 23 17:55:14.827 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 17:55:14.833174 coreos-metadata[1516]: Jan 23 17:55:14.828 INFO Fetch successful Jan 23 17:55:14.833282 tar[1539]: linux-arm64/LICENSE Jan 23 17:55:14.837312 tar[1539]: linux-arm64/helm Jan 23 17:55:14.844362 jq[1556]: true Jan 23 17:55:14.845482 dbus-daemon[1517]: [system] SELinux support is enabled Jan 23 17:55:14.851917 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:55:14.855887 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:55:14.855929 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:55:14.856832 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:55:14.856846 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:55:14.864918 systemd-timesyncd[1442]: Contacted time server 185.252.140.125:123 (3.flatcar.pool.ntp.org). Jan 23 17:55:14.864995 systemd-timesyncd[1442]: Initial clock synchronization to Fri 2026-01-23 17:55:14.717265 UTC. Jan 23 17:55:14.872418 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:55:14.876839 update_engine[1531]: I20260123 17:55:14.873706 1531 update_check_scheduler.cc:74] Next update check in 7m17s Jan 23 17:55:14.890815 extend-filesystems[1520]: Resized partition /dev/sda9 Jan 23 17:55:14.891840 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:55:14.898716 extend-filesystems[1570]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:55:14.900942 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 23 17:55:14.979905 systemd-logind[1529]: New seat seat0. Jan 23 17:55:14.983955 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:55:14.983984 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 17:55:14.984228 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:55:15.007601 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:55:15.023212 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:55:15.035883 systemd[1]: Starting sshkeys.service... Jan 23 17:55:15.043548 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 23 17:55:15.044841 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:55:15.060762 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 23 17:55:15.079386 extend-filesystems[1570]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 17:55:15.079386 extend-filesystems[1570]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 23 17:55:15.079386 extend-filesystems[1570]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 23 17:55:15.089569 extend-filesystems[1520]: Resized filesystem in /dev/sda9 Jan 23 17:55:15.081182 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:55:15.082601 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:55:15.119826 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:55:15.127943 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:55:15.216348 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:55:15.225276 coreos-metadata[1604]: Jan 23 17:55:15.224 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 17:55:15.229372 coreos-metadata[1604]: Jan 23 17:55:15.229 INFO Fetch successful Jan 23 17:55:15.234129 unknown[1604]: wrote ssh authorized keys file for user: core Jan 23 17:55:15.237779 containerd[1553]: time="2026-01-23T17:55:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:55:15.239757 containerd[1553]: time="2026-01-23T17:55:15.239701191Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:55:15.267198 containerd[1553]: time="2026-01-23T17:55:15.267160022Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.091µs" Jan 23 17:55:15.267913 containerd[1553]: time="2026-01-23T17:55:15.267882533Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:55:15.267997 containerd[1553]: time="2026-01-23T17:55:15.267983752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:55:15.268672 containerd[1553]: time="2026-01-23T17:55:15.268644621Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:55:15.269126 containerd[1553]: time="2026-01-23T17:55:15.269099322Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:55:15.269264 containerd[1553]: time="2026-01-23T17:55:15.269246125Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:55:15.270003 containerd[1553]: time="2026-01-23T17:55:15.269976724Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:55:15.270256 containerd[1553]: time="2026-01-23T17:55:15.270237820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:55:15.271168 containerd[1553]: time="2026-01-23T17:55:15.271135325Z" 
level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:55:15.271851 containerd[1553]: time="2026-01-23T17:55:15.271824541Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:55:15.271974 containerd[1553]: time="2026-01-23T17:55:15.271955639Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:55:15.272133 containerd[1553]: time="2026-01-23T17:55:15.272115516Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:55:15.272913 update-ssh-keys[1610]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:55:15.273641 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:55:15.275067 containerd[1553]: time="2026-01-23T17:55:15.274870968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:55:15.279666 containerd[1553]: time="2026-01-23T17:55:15.277812759Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:55:15.279666 containerd[1553]: time="2026-01-23T17:55:15.277847821Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:55:15.279666 containerd[1553]: time="2026-01-23T17:55:15.277859364Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:55:15.279666 containerd[1553]: time="2026-01-23T17:55:15.277895525Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:55:15.279666 containerd[1553]: time="2026-01-23T17:55:15.278117084Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:55:15.279666 containerd[1553]: time="2026-01-23T17:55:15.278172562Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:55:15.282564 systemd[1]: Finished sshkeys.service. 
Jan 23 17:55:15.289557 containerd[1553]: time="2026-01-23T17:55:15.289503090Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:55:15.289929 containerd[1553]: time="2026-01-23T17:55:15.289910008Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:55:15.290106 containerd[1553]: time="2026-01-23T17:55:15.290087789Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:55:15.290271 containerd[1553]: time="2026-01-23T17:55:15.290256697Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:55:15.290388 containerd[1553]: time="2026-01-23T17:55:15.290371815Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:55:15.290488 containerd[1553]: time="2026-01-23T17:55:15.290472445Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:55:15.290728 containerd[1553]: time="2026-01-23T17:55:15.290708845Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:55:15.290788 containerd[1553]: time="2026-01-23T17:55:15.290775160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:55:15.290894 containerd[1553]: time="2026-01-23T17:55:15.290877400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:55:15.291021 containerd[1553]: time="2026-01-23T17:55:15.291007634Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:55:15.291073 containerd[1553]: time="2026-01-23T17:55:15.291061070Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:55:15.291167 containerd[1553]: time="2026-01-23T17:55:15.291152592Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:55:15.291702 containerd[1553]: time="2026-01-23T17:55:15.291430924Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:55:15.291702 containerd[1553]: time="2026-01-23T17:55:15.291462963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:55:15.291702 containerd[1553]: time="2026-01-23T17:55:15.291479178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.291500616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292035255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292059951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292074007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292085118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 
23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292098389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292109383Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292120847Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292339737Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:55:15.292565 containerd[1553]: time="2026-01-23T17:55:15.292368555Z" level=info msg="Start snapshots syncer" Jan 23 17:55:15.293114 containerd[1553]: time="2026-01-23T17:55:15.293091498Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:55:15.294150 containerd[1553]: time="2026-01-23T17:55:15.293838273Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:55:15.294519 containerd[1553]: time="2026-01-23T17:55:15.294477116Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:55:15.294843 containerd[1553]: time="2026-01-23T17:55:15.294793808Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295382983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295415414Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295427625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295439796Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295452321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295462843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:55:15.295706 containerd[1553]: time="2026-01-23T17:55:15.295474033Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296034704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296061363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296074437Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296113857Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296128855Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296137846Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296147387Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296154847Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296167333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296179111Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296296860Z" level=info msg="runtime interface created" Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296304830Z" level=info msg="created NRI interface" Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296314489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296328624Z" level=info msg="Connect containerd service" Jan 23 17:55:15.296848 containerd[1553]: time="2026-01-23T17:55:15.296362978Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 
23 17:55:15.299324 containerd[1553]: time="2026-01-23T17:55:15.299050270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464067692Z" level=info msg="Start subscribing containerd event" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464164317Z" level=info msg="Start recovering state" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464269423Z" level=info msg="Start event monitor" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464301972Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464309864Z" level=info msg="Start streaming server" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464318659Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464325687Z" level=info msg="runtime interface starting up..." Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464330987Z" level=info msg="starting plugins..." Jan 23 17:55:15.464810 containerd[1553]: time="2026-01-23T17:55:15.464346339Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:55:15.465184 containerd[1553]: time="2026-01-23T17:55:15.465159154Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:55:15.465606 containerd[1553]: time="2026-01-23T17:55:15.465564148Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:55:15.465785 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:55:15.469717 containerd[1553]: time="2026-01-23T17:55:15.469677615Z" level=info msg="containerd successfully booted in 0.234825s" Jan 23 17:55:15.512293 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:55:15.514526 tar[1539]: linux-arm64/README.md Jan 23 17:55:15.534213 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:55:15.540991 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:55:15.545047 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:55:15.572897 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:55:15.573401 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:55:15.578137 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:55:15.600575 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:55:15.606742 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:55:15.609979 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 17:55:15.611231 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:55:16.544705 systemd-networkd[1423]: eth1: Gained IPv6LL Jan 23 17:55:16.548867 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:55:16.551483 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:55:16.555311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:55:16.557735 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 23 17:55:16.600702 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:55:16.609768 systemd-networkd[1423]: eth0: Gained IPv6LL Jan 23 17:55:17.278501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:55:17.280921 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:55:17.283758 systemd[1]: Startup finished in 2.395s (kernel) + 5.385s (initrd) + 5.340s (userspace) = 13.122s. Jan 23 17:55:17.290055 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:55:17.811118 kubelet[1664]: E0123 17:55:17.811051 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:55:17.814672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:55:17.814893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:55:17.815385 systemd[1]: kubelet.service: Consumed 873ms CPU time, 257.4M memory peak. Jan 23 17:55:28.065475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:55:28.067573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:55:28.237907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:55:28.258060 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:55:28.302128 kubelet[1683]: E0123 17:55:28.302053 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:55:28.306123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:55:28.306258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:55:28.306946 systemd[1]: kubelet.service: Consumed 166ms CPU time, 106.9M memory peak. Jan 23 17:55:38.556919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:55:38.559856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:55:38.725943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:55:38.738946 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:55:38.787875 kubelet[1698]: E0123 17:55:38.787807 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:55:38.790676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:55:38.790938 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 17:55:38.791767 systemd[1]: kubelet.service: Consumed 173ms CPU time, 104.9M memory peak. Jan 23 17:55:47.757961 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:55:47.759978 systemd[1]: Started sshd@0-46.224.74.11:22-68.220.241.50:58934.service - OpenSSH per-connection server daemon (68.220.241.50:58934). Jan 23 17:55:48.424548 sshd[1706]: Accepted publickey for core from 68.220.241.50 port 58934 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:48.427091 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:48.436182 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:55:48.438849 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:55:48.446562 systemd-logind[1529]: New session 1 of user core. Jan 23 17:55:48.464566 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:55:48.468851 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:55:48.484982 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:55:48.489041 systemd-logind[1529]: New session c1 of user core. Jan 23 17:55:48.623871 systemd[1711]: Queued start job for default target default.target. Jan 23 17:55:48.636638 systemd[1711]: Created slice app.slice - User Application Slice. Jan 23 17:55:48.636711 systemd[1711]: Reached target paths.target - Paths. Jan 23 17:55:48.636792 systemd[1711]: Reached target timers.target - Timers. Jan 23 17:55:48.638672 systemd[1711]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:55:48.666674 systemd[1711]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:55:48.667173 systemd[1711]: Reached target sockets.target - Sockets. Jan 23 17:55:48.667504 systemd[1711]: Reached target basic.target - Basic System. Jan 23 17:55:48.667808 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:55:48.668054 systemd[1711]: Reached target default.target - Main User Target. Jan 23 17:55:48.668232 systemd[1711]: Startup finished in 170ms. Jan 23 17:55:48.680033 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:55:49.041372 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:55:49.044830 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:55:49.133562 systemd[1]: Started sshd@1-46.224.74.11:22-68.220.241.50:58938.service - OpenSSH per-connection server daemon (68.220.241.50:58938). Jan 23 17:55:49.212555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:55:49.223433 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:55:49.268891 kubelet[1733]: E0123 17:55:49.268809 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:55:49.271960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:55:49.272151 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 17:55:49.273222 systemd[1]: kubelet.service: Consumed 163ms CPU time, 105M memory peak. Jan 23 17:55:49.759698 sshd[1725]: Accepted publickey for core from 68.220.241.50 port 58938 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:49.761929 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:49.768706 systemd-logind[1529]: New session 2 of user core. Jan 23 17:55:49.776797 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:55:50.190855 sshd[1739]: Connection closed by 68.220.241.50 port 58938 Jan 23 17:55:50.190350 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Jan 23 17:55:50.198267 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:55:50.198496 systemd[1]: sshd@1-46.224.74.11:22-68.220.241.50:58938.service: Deactivated successfully. Jan 23 17:55:50.201438 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 17:55:50.204970 systemd-logind[1529]: Removed session 2. Jan 23 17:55:50.298494 systemd[1]: Started sshd@2-46.224.74.11:22-68.220.241.50:58948.service - OpenSSH per-connection server daemon (68.220.241.50:58948). Jan 23 17:55:50.916611 sshd[1745]: Accepted publickey for core from 68.220.241.50 port 58948 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:50.918448 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:50.923768 systemd-logind[1529]: New session 3 of user core. Jan 23 17:55:50.935299 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:55:51.336615 sshd[1748]: Connection closed by 68.220.241.50 port 58948 Jan 23 17:55:51.336345 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jan 23 17:55:51.344118 systemd[1]: sshd@2-46.224.74.11:22-68.220.241.50:58948.service: Deactivated successfully. Jan 23 17:55:51.347102 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:55:51.350280 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:55:51.352112 systemd-logind[1529]: Removed session 3. Jan 23 17:55:51.446644 systemd[1]: Started sshd@3-46.224.74.11:22-68.220.241.50:58964.service - OpenSSH per-connection server daemon (68.220.241.50:58964). Jan 23 17:55:52.071762 sshd[1754]: Accepted publickey for core from 68.220.241.50 port 58964 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:52.073938 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:52.079154 systemd-logind[1529]: New session 4 of user core. Jan 23 17:55:52.089832 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:55:52.503387 sshd[1757]: Connection closed by 68.220.241.50 port 58964 Jan 23 17:55:52.505072 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Jan 23 17:55:52.509667 systemd[1]: sshd@3-46.224.74.11:22-68.220.241.50:58964.service: Deactivated successfully. Jan 23 17:55:52.512633 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:55:52.515868 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:55:52.517313 systemd-logind[1529]: Removed session 4. Jan 23 17:55:52.622044 systemd[1]: Started sshd@4-46.224.74.11:22-68.220.241.50:52138.service - OpenSSH per-connection server daemon (68.220.241.50:52138). 
Jan 23 17:55:53.271168 sshd[1763]: Accepted publickey for core from 68.220.241.50 port 52138 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:53.273481 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:53.278697 systemd-logind[1529]: New session 5 of user core. Jan 23 17:55:53.286795 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 17:55:53.622758 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:55:53.623049 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:55:53.636183 sudo[1767]: pam_unix(sudo:session): session closed for user root Jan 23 17:55:53.735178 sshd[1766]: Connection closed by 68.220.241.50 port 52138 Jan 23 17:55:53.736296 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Jan 23 17:55:53.743042 systemd[1]: sshd@4-46.224.74.11:22-68.220.241.50:52138.service: Deactivated successfully. Jan 23 17:55:53.745927 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:55:53.748876 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:55:53.751030 systemd-logind[1529]: Removed session 5. Jan 23 17:55:53.854139 systemd[1]: Started sshd@5-46.224.74.11:22-68.220.241.50:52140.service - OpenSSH per-connection server daemon (68.220.241.50:52140). Jan 23 17:55:54.504146 sshd[1773]: Accepted publickey for core from 68.220.241.50 port 52140 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:54.506484 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:54.513156 systemd-logind[1529]: New session 6 of user core. Jan 23 17:55:54.520078 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 17:55:54.845555 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:55:54.845836 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:55:54.852791 sudo[1778]: pam_unix(sudo:session): session closed for user root Jan 23 17:55:54.860062 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:55:54.860382 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:55:54.872396 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:55:54.915799 augenrules[1800]: No rules Jan 23 17:55:54.917906 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:55:54.918127 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:55:54.920613 sudo[1777]: pam_unix(sudo:session): session closed for user root Jan 23 17:55:55.018918 sshd[1776]: Connection closed by 68.220.241.50 port 52140 Jan 23 17:55:55.019967 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Jan 23 17:55:55.027478 systemd[1]: sshd@5-46.224.74.11:22-68.220.241.50:52140.service: Deactivated successfully. Jan 23 17:55:55.029477 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:55:55.030554 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:55:55.031970 systemd-logind[1529]: Removed session 6. Jan 23 17:55:55.128597 systemd[1]: Started sshd@6-46.224.74.11:22-68.220.241.50:52150.service - OpenSSH per-connection server daemon (68.220.241.50:52150). 
Jan 23 17:55:55.746842 sshd[1809]: Accepted publickey for core from 68.220.241.50 port 52150 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:55:55.748946 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:55:55.754810 systemd-logind[1529]: New session 7 of user core. Jan 23 17:55:55.771929 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:55:56.076641 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:55:56.076910 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:55:56.390694 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 17:55:56.407090 (dockerd)[1830]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:55:56.629385 dockerd[1830]: time="2026-01-23T17:55:56.629304046Z" level=info msg="Starting up" Jan 23 17:55:56.630587 dockerd[1830]: time="2026-01-23T17:55:56.630405599Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:55:56.643664 dockerd[1830]: time="2026-01-23T17:55:56.643306340Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:55:56.672770 systemd[1]: var-lib-docker-metacopy\x2dcheck3123218518-merged.mount: Deactivated successfully. Jan 23 17:55:56.681976 dockerd[1830]: time="2026-01-23T17:55:56.681909730Z" level=info msg="Loading containers: start." Jan 23 17:55:56.695548 kernel: Initializing XFRM netlink socket Jan 23 17:55:56.928794 systemd-networkd[1423]: docker0: Link UP Jan 23 17:55:56.934306 dockerd[1830]: time="2026-01-23T17:55:56.934228755Z" level=info msg="Loading containers: done." Jan 23 17:55:56.950657 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4114363959-merged.mount: Deactivated successfully. Jan 23 17:55:56.953721 dockerd[1830]: time="2026-01-23T17:55:56.953673218Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:55:56.953810 dockerd[1830]: time="2026-01-23T17:55:56.953761651Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:55:56.953867 dockerd[1830]: time="2026-01-23T17:55:56.953849124Z" level=info msg="Initializing buildkit" Jan 23 17:55:56.978130 dockerd[1830]: time="2026-01-23T17:55:56.978072931Z" level=info msg="Completed buildkit initialization" Jan 23 17:55:56.987393 dockerd[1830]: time="2026-01-23T17:55:56.987315360Z" level=info msg="Daemon has completed initialization" Jan 23 17:55:56.987734 dockerd[1830]: time="2026-01-23T17:55:56.987619816Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:55:56.988643 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:55:58.034378 containerd[1553]: time="2026-01-23T17:55:58.034332944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 17:55:58.657747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780528257.mount: Deactivated successfully. Jan 23 17:55:59.454063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jan 23 17:55:59.457224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:55:59.609544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:55:59.620938 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:55:59.672583 kubelet[2107]: E0123 17:55:59.672452 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:55:59.676988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:55:59.677360 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:55:59.678305 systemd[1]: kubelet.service: Consumed 158ms CPU time, 105.3M memory peak. Jan 23 17:55:59.747915 containerd[1553]: time="2026-01-23T17:55:59.747770404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:55:59.750149 containerd[1553]: time="2026-01-23T17:55:59.750095927Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:55:59.750291 containerd[1553]: time="2026-01-23T17:55:59.750167322Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387379" Jan 23 17:55:59.753535 containerd[1553]: time="2026-01-23T17:55:59.753147521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:55:59.754454 containerd[1553]: time="2026-01-23T17:55:59.754297083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.71982619s" Jan 23 17:55:59.754454 containerd[1553]: time="2026-01-23T17:55:59.754333121Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 17:55:59.756206 containerd[1553]: time="2026-01-23T17:55:59.756112121Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 17:56:00.165728 update_engine[1531]: I20260123 17:56:00.164569 1531 update_attempter.cc:509] Updating boot flags... 
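
The kubelet crash-loop above (restart counter 4, exit status 1) is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it during init/join. A minimal hand-written stand-in, for illustration only (the real kubeadm-generated file carries far more; the cgroupDriver value matches what the kubelet later reports receiving from the CRI):

    printf '%s\n' \
        'apiVersion: kubelet.config.k8s.io/v1beta1' \
        'kind: KubeletConfiguration' \
        'cgroupDriver: systemd' |
        sudo tee /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet.service
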
Jan 23 17:56:01.091658 containerd[1553]: time="2026-01-23T17:56:01.091603481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:01.093653 containerd[1553]: time="2026-01-23T17:56:01.093598000Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553101" Jan 23 17:56:01.094389 containerd[1553]: time="2026-01-23T17:56:01.094341314Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:01.098067 containerd[1553]: time="2026-01-23T17:56:01.098006331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:01.101612 containerd[1553]: time="2026-01-23T17:56:01.101551634Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.345245527s" Jan 23 17:56:01.101612 containerd[1553]: time="2026-01-23T17:56:01.101590472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 17:56:01.105269 containerd[1553]: time="2026-01-23T17:56:01.104857793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 17:56:02.318069 containerd[1553]: time="2026-01-23T17:56:02.317990379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:02.320418 containerd[1553]: time="2026-01-23T17:56:02.320350242Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298087" Jan 23 17:56:02.321472 containerd[1553]: time="2026-01-23T17:56:02.321415941Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:02.325893 containerd[1553]: time="2026-01-23T17:56:02.325801126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:02.326953 containerd[1553]: time="2026-01-23T17:56:02.326470927Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.221556098s" Jan 23 17:56:02.326953 containerd[1553]: time="2026-01-23T17:56:02.326526564Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 17:56:02.327157 
containerd[1553]: time="2026-01-23T17:56:02.327031855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 17:56:03.301553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279755647.mount: Deactivated successfully. Jan 23 17:56:03.698128 containerd[1553]: time="2026-01-23T17:56:03.697955468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:03.699560 containerd[1553]: time="2026-01-23T17:56:03.699493143Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258699" Jan 23 17:56:03.700552 containerd[1553]: time="2026-01-23T17:56:03.700346136Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:03.705683 containerd[1553]: time="2026-01-23T17:56:03.705555528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:03.706951 containerd[1553]: time="2026-01-23T17:56:03.706741183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.379681129s" Jan 23 17:56:03.706951 containerd[1553]: time="2026-01-23T17:56:03.706790900Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 17:56:03.707387 containerd[1553]: time="2026-01-23T17:56:03.707337350Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 17:56:04.290446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634132893.mount: Deactivated successfully. 
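
Each pull above follows the same pattern: ImageCreate events for the tag, the image ID and the repo digest, a timing summary, then cleanup of a containerd tmpmount. The same CRI image store can be exercised by hand (image name and tag verbatim from the log; assumes crictl is pointed at the containerd socket):

    crictl pull registry.k8s.io/kube-proxy:v1.33.7
    crictl images | grep kube-proxy
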
Jan 23 17:56:05.070543 containerd[1553]: time="2026-01-23T17:56:05.070341677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:05.073810 containerd[1553]: time="2026-01-23T17:56:05.072992224Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Jan 23 17:56:05.073810 containerd[1553]: time="2026-01-23T17:56:05.073086539Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:05.076337 containerd[1553]: time="2026-01-23T17:56:05.076299898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:05.077771 containerd[1553]: time="2026-01-23T17:56:05.077738785Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.370271523s" Jan 23 17:56:05.077898 containerd[1553]: time="2026-01-23T17:56:05.077880738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 17:56:05.078670 containerd[1553]: time="2026-01-23T17:56:05.078623061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:56:05.630839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120877875.mount: Deactivated successfully. 
Jan 23 17:56:05.636655 containerd[1553]: time="2026-01-23T17:56:05.636572677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:56:05.638170 containerd[1553]: time="2026-01-23T17:56:05.638085961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 23 17:56:05.638930 containerd[1553]: time="2026-01-23T17:56:05.638873842Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:56:05.642139 containerd[1553]: time="2026-01-23T17:56:05.642059282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:56:05.642870 containerd[1553]: time="2026-01-23T17:56:05.642810764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.151584ms" Jan 23 17:56:05.642870 containerd[1553]: time="2026-01-23T17:56:05.642852122Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:56:05.643487 containerd[1553]: time="2026-01-23T17:56:05.643416653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 17:56:06.242235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3882200355.mount: Deactivated successfully. 
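
Unlike the other images, pause:3.10 is recorded with an extra io.cri-containerd.pinned label: it is the sandbox (infra) image, and pinning keeps the kubelet's image garbage collector from pruning it. Assuming containerd's default CRI setup, the configured sandbox image appears in the runtime info dump:

    crictl info | grep -i sandbox
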
Jan 23 17:56:08.026017 containerd[1553]: time="2026-01-23T17:56:08.025932235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:08.028477 containerd[1553]: time="2026-01-23T17:56:08.028420126Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013713" Jan 23 17:56:08.029264 containerd[1553]: time="2026-01-23T17:56:08.029214451Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:08.033380 containerd[1553]: time="2026-01-23T17:56:08.033305992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:08.035568 containerd[1553]: time="2026-01-23T17:56:08.035485177Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.392031925s" Jan 23 17:56:08.035568 containerd[1553]: time="2026-01-23T17:56:08.035557653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 17:56:09.703919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 17:56:09.710722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:09.859779 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:09.870151 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:56:09.915524 kubelet[2290]: E0123 17:56:09.912822 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:56:09.915807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:56:09.915930 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:56:09.917639 systemd[1]: kubelet.service: Consumed 154ms CPU time, 106.9M memory peak. Jan 23 17:56:12.883343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:12.883640 systemd[1]: kubelet.service: Consumed 154ms CPU time, 106.9M memory peak. Jan 23 17:56:12.887295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:12.928240 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)... Jan 23 17:56:12.928262 systemd[1]: Reloading... Jan 23 17:56:13.073590 zram_generator::config[2355]: No configuration found. Jan 23 17:56:13.245568 systemd[1]: Reloading finished in 316 ms. Jan 23 17:56:13.308202 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:56:13.308712 systemd[1]: kubelet.service: Failed with result 'signal'. 
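
After the etcd pull the kubelet unit is still restart-looping on the missing config file, and the systemd reload then terminates the pending start job (status=15/TERM). The logged reload/restart cycle corresponds roughly to:

    sudo systemctl daemon-reload            # "Reload requested from client PID ..."
    sudo systemctl restart kubelet.service
    systemctl status kubelet.service --no-pager
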
Jan 23 17:56:13.310574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:13.310653 systemd[1]: kubelet.service: Consumed 112ms CPU time, 95.2M memory peak. Jan 23 17:56:13.314931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:13.479350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:13.491169 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:56:13.537667 kubelet[2396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:56:13.538140 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:56:13.538187 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:56:13.538345 kubelet[2396]: I0123 17:56:13.538310 2396 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:56:14.699461 kubelet[2396]: I0123 17:56:14.699404 2396 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:56:14.700602 kubelet[2396]: I0123 17:56:14.700011 2396 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:56:14.700602 kubelet[2396]: I0123 17:56:14.700466 2396 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:56:14.736803 kubelet[2396]: E0123 17:56:14.736742 2396 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.224.74.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.74.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 17:56:14.738392 kubelet[2396]: I0123 17:56:14.738333 2396 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:56:14.748884 kubelet[2396]: I0123 17:56:14.748828 2396 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:56:14.751663 kubelet[2396]: I0123 17:56:14.751633 2396 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:56:14.753396 kubelet[2396]: I0123 17:56:14.753328 2396 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:56:14.753592 kubelet[2396]: I0123 17:56:14.753390 2396 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-3-b08bb0c7a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:56:14.753704 kubelet[2396]: I0123 17:56:14.753649 2396 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:56:14.753704 kubelet[2396]: I0123 17:56:14.753660 2396 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:56:14.753896 kubelet[2396]: I0123 17:56:14.753861 2396 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:56:14.757328 kubelet[2396]: I0123 17:56:14.757285 2396 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:56:14.757328 kubelet[2396]: I0123 17:56:14.757321 2396 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:56:14.757613 kubelet[2396]: I0123 17:56:14.757351 2396 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:56:14.757613 kubelet[2396]: I0123 17:56:14.757367 2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:56:14.761485 kubelet[2396]: E0123 17:56:14.760822 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.224.74.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.74.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:56:14.761485 kubelet[2396]: E0123 17:56:14.761222 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.224.74.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-3-b08bb0c7a1&limit=500&resourceVersion=0\": dial tcp 46.224.74.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Jan 23 17:56:14.761632 kubelet[2396]: I0123 17:56:14.761550 2396 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:56:14.762288 kubelet[2396]: I0123 17:56:14.762261 2396 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:56:14.762411 kubelet[2396]: W0123 17:56:14.762392 2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:56:14.767342 kubelet[2396]: I0123 17:56:14.767305 2396 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:56:14.767342 kubelet[2396]: I0123 17:56:14.767350 2396 server.go:1289] "Started kubelet" Jan 23 17:56:14.771533 kubelet[2396]: I0123 17:56:14.770492 2396 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:56:14.771789 kubelet[2396]: I0123 17:56:14.771772 2396 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:56:14.774017 kubelet[2396]: I0123 17:56:14.773670 2396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:56:14.774118 kubelet[2396]: I0123 17:56:14.774036 2396 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:56:14.775535 kubelet[2396]: E0123 17:56:14.774185 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.74.11:6443/api/v1/namespaces/default/events\": dial tcp 46.224.74.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-3-b08bb0c7a1.188d6dd166d565a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-3-b08bb0c7a1,UID:ci-4459-2-3-3-b08bb0c7a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-3-b08bb0c7a1,},FirstTimestamp:2026-01-23 17:56:14.767326631 +0000 UTC m=+1.269287759,LastTimestamp:2026-01-23 17:56:14.767326631 +0000 UTC m=+1.269287759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-3-b08bb0c7a1,}" Jan 23 17:56:14.778478 kubelet[2396]: I0123 17:56:14.778315 2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:56:14.782076 kubelet[2396]: I0123 17:56:14.781489 2396 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:56:14.782076 kubelet[2396]: I0123 17:56:14.778842 2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:56:14.782076 kubelet[2396]: I0123 17:56:14.781744 2396 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:56:14.782076 kubelet[2396]: I0123 17:56:14.781787 2396 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:56:14.783057 kubelet[2396]: E0123 17:56:14.783007 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.224.74.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.74.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 17:56:14.783879 kubelet[2396]: 
E0123 17:56:14.783672 2396 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" Jan 23 17:56:14.783879 kubelet[2396]: E0123 17:56:14.783782 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.74.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-3-b08bb0c7a1?timeout=10s\": dial tcp 46.224.74.11:6443: connect: connection refused" interval="200ms" Jan 23 17:56:14.784009 kubelet[2396]: E0123 17:56:14.783897 2396 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:56:14.784333 kubelet[2396]: I0123 17:56:14.784111 2396 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:56:14.784333 kubelet[2396]: I0123 17:56:14.784200 2396 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:56:14.785685 kubelet[2396]: I0123 17:56:14.785655 2396 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:56:14.807737 kubelet[2396]: I0123 17:56:14.807564 2396 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 17:56:14.809752 kubelet[2396]: I0123 17:56:14.809723 2396 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 17:56:14.809875 kubelet[2396]: I0123 17:56:14.809863 2396 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:56:14.809954 kubelet[2396]: I0123 17:56:14.809941 2396 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:56:14.810004 kubelet[2396]: I0123 17:56:14.809995 2396 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:56:14.810109 kubelet[2396]: E0123 17:56:14.810091 2396 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:56:14.811750 kubelet[2396]: E0123 17:56:14.811718 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.224.74.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.74.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 17:56:14.819399 kubelet[2396]: I0123 17:56:14.819122 2396 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:56:14.819399 kubelet[2396]: I0123 17:56:14.819139 2396 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:56:14.819399 kubelet[2396]: I0123 17:56:14.819161 2396 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:56:14.821064 kubelet[2396]: I0123 17:56:14.821022 2396 policy_none.go:49] "None policy: Start" Jan 23 17:56:14.821196 kubelet[2396]: I0123 17:56:14.821183 2396 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:56:14.821266 kubelet[2396]: I0123 17:56:14.821256 2396 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:56:14.828133 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:56:14.841972 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
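
With a config file in place, kubelet 2396 gets much further: it registers the container factories, initializes iptables rules for both IP families, starts the "none" CPU/memory policies, and builds its QoS hierarchy (kubepods.slice with its besteffort child here, burstable just below). Under the systemd cgroup driver that tree can be inspected directly; a sketch:

    systemctl status kubepods.slice --no-pager
    systemd-cgls /kubepods.slice     # control-group path under /sys/fs/cgroup
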
Jan 23 17:56:14.866358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 17:56:14.868016 kubelet[2396]: E0123 17:56:14.867994 2396 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:56:14.869542 kubelet[2396]: I0123 17:56:14.869180 2396 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:56:14.869542 kubelet[2396]: I0123 17:56:14.869197 2396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:56:14.869712 kubelet[2396]: I0123 17:56:14.869700 2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:56:14.871475 kubelet[2396]: E0123 17:56:14.871455 2396 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:56:14.871623 kubelet[2396]: E0123 17:56:14.871610 2396 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-3-b08bb0c7a1\" not found" Jan 23 17:56:14.899621 kubelet[2396]: E0123 17:56:14.899430 2396 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.74.11:6443/api/v1/namespaces/default/events\": dial tcp 46.224.74.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-3-b08bb0c7a1.188d6dd166d565a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-3-b08bb0c7a1,UID:ci-4459-2-3-3-b08bb0c7a1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-3-b08bb0c7a1,},FirstTimestamp:2026-01-23 17:56:14.767326631 +0000 UTC m=+1.269287759,LastTimestamp:2026-01-23 17:56:14.767326631 +0000 UTC m=+1.269287759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-3-b08bb0c7a1,}" Jan 23 17:56:14.926020 systemd[1]: Created slice kubepods-burstable-podb6e196e44ab1b71476ab95a6ede4ece0.slice - libcontainer container kubepods-burstable-podb6e196e44ab1b71476ab95a6ede4ece0.slice. Jan 23 17:56:14.948270 kubelet[2396]: E0123 17:56:14.948169 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.952976 systemd[1]: Created slice kubepods-burstable-pod79e42748197e1b325033be20ff13cee1.slice - libcontainer container kubepods-burstable-pod79e42748197e1b325033be20ff13cee1.slice. Jan 23 17:56:14.957787 kubelet[2396]: E0123 17:56:14.957761 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.959134 systemd[1]: Created slice kubepods-burstable-pod87c5f1b495b2801ebdcb4e77dba0b154.slice - libcontainer container kubepods-burstable-pod87c5f1b495b2801ebdcb4e77dba0b154.slice. 
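
The three per-pod burstable slices belong to the static control-plane pods, whose manifests come from the static pod path the kubelet logged earlier ("Adding static pod path" /etc/kubernetes/manifests); the mirror-pod and event posts fail because the apiserver at 46.224.74.11:6443 is not up yet. On such a node one would typically see:

    ls /etc/kubernetes/manifests
    # typically: kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
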
Jan 23 17:56:14.961567 kubelet[2396]: E0123 17:56:14.961507 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.972207 kubelet[2396]: I0123 17:56:14.972146 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.972795 kubelet[2396]: E0123 17:56:14.972759 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.74.11:6443/api/v1/nodes\": dial tcp 46.224.74.11:6443: connect: connection refused" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.982729 kubelet[2396]: I0123 17:56:14.982531 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.982729 kubelet[2396]: I0123 17:56:14.982649 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.982939 kubelet[2396]: I0123 17:56:14.982758 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.982939 kubelet[2396]: I0123 17:56:14.982850 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e196e44ab1b71476ab95a6ede4ece0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"b6e196e44ab1b71476ab95a6ede4ece0\") " pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.982939 kubelet[2396]: I0123 17:56:14.982914 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.983090 kubelet[2396]: I0123 17:56:14.982972 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.983134 kubelet[2396]: I0123 17:56:14.983088 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/87c5f1b495b2801ebdcb4e77dba0b154-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"87c5f1b495b2801ebdcb4e77dba0b154\") " pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.983255 kubelet[2396]: I0123 17:56:14.983145 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e196e44ab1b71476ab95a6ede4ece0-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"b6e196e44ab1b71476ab95a6ede4ece0\") " pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.983255 kubelet[2396]: I0123 17:56:14.983180 2396 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e196e44ab1b71476ab95a6ede4ece0-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"b6e196e44ab1b71476ab95a6ede4ece0\") " pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:14.985237 kubelet[2396]: E0123 17:56:14.985149 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.74.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-3-b08bb0c7a1?timeout=10s\": dial tcp 46.224.74.11:6443: connect: connection refused" interval="400ms" Jan 23 17:56:15.176229 kubelet[2396]: I0123 17:56:15.176158 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:15.176816 kubelet[2396]: E0123 17:56:15.176724 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.74.11:6443/api/v1/nodes\": dial tcp 46.224.74.11:6443: connect: connection refused" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:15.250640 containerd[1553]: time="2026-01-23T17:56:15.250577528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-3-b08bb0c7a1,Uid:b6e196e44ab1b71476ab95a6ede4ece0,Namespace:kube-system,Attempt:0,}" Jan 23 17:56:15.259554 containerd[1553]: time="2026-01-23T17:56:15.259336440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1,Uid:79e42748197e1b325033be20ff13cee1,Namespace:kube-system,Attempt:0,}" Jan 23 17:56:15.262567 containerd[1553]: time="2026-01-23T17:56:15.262524575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-3-b08bb0c7a1,Uid:87c5f1b495b2801ebdcb4e77dba0b154,Namespace:kube-system,Attempt:0,}" Jan 23 17:56:15.303252 containerd[1553]: time="2026-01-23T17:56:15.303170400Z" level=info msg="connecting to shim 82eef3b8dc390616bd026ae186bb68693badf9d655b3d87d4a2bc6a59c8f5626" address="unix:///run/containerd/s/469631e4446112069354d2f2c9dfaa26f3f4baf79e38fd273f8f76c07dabe2af" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:56:15.305817 containerd[1553]: time="2026-01-23T17:56:15.305757715Z" level=info msg="connecting to shim 27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09" address="unix:///run/containerd/s/bbc7fdc360ce5f26094cacbefbe3de523f0fbe7ee39d3e4a252702fe1ef53027" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:56:15.313780 containerd[1553]: time="2026-01-23T17:56:15.313669535Z" level=info msg="connecting to shim 50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833" address="unix:///run/containerd/s/79fd21cad128a47240780fb907272b4146e57575ac12f1c3832df64e7457c3a3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 
17:56:15.341719 systemd[1]: Started cri-containerd-27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09.scope - libcontainer container 27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09. Jan 23 17:56:15.357701 systemd[1]: Started cri-containerd-82eef3b8dc390616bd026ae186bb68693badf9d655b3d87d4a2bc6a59c8f5626.scope - libcontainer container 82eef3b8dc390616bd026ae186bb68693badf9d655b3d87d4a2bc6a59c8f5626. Jan 23 17:56:15.362136 systemd[1]: Started cri-containerd-50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833.scope - libcontainer container 50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833. Jan 23 17:56:15.386421 kubelet[2396]: E0123 17:56:15.386291 2396 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.74.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-3-b08bb0c7a1?timeout=10s\": dial tcp 46.224.74.11:6443: connect: connection refused" interval="800ms" Jan 23 17:56:15.424300 containerd[1553]: time="2026-01-23T17:56:15.423915194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-3-b08bb0c7a1,Uid:b6e196e44ab1b71476ab95a6ede4ece0,Namespace:kube-system,Attempt:0,} returns sandbox id \"82eef3b8dc390616bd026ae186bb68693badf9d655b3d87d4a2bc6a59c8f5626\"" Jan 23 17:56:15.435128 containerd[1553]: time="2026-01-23T17:56:15.434394930Z" level=info msg="CreateContainer within sandbox \"82eef3b8dc390616bd026ae186bb68693badf9d655b3d87d4a2bc6a59c8f5626\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:56:15.436331 containerd[1553]: time="2026-01-23T17:56:15.436255348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-3-b08bb0c7a1,Uid:87c5f1b495b2801ebdcb4e77dba0b154,Namespace:kube-system,Attempt:0,} returns sandbox id \"27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09\"" Jan 23 17:56:15.439613 containerd[1553]: time="2026-01-23T17:56:15.438982019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1,Uid:79e42748197e1b325033be20ff13cee1,Namespace:kube-system,Attempt:0,} returns sandbox id \"50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833\"" Jan 23 17:56:15.441654 containerd[1553]: time="2026-01-23T17:56:15.441611893Z" level=info msg="CreateContainer within sandbox \"27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:56:15.445647 containerd[1553]: time="2026-01-23T17:56:15.445605481Z" level=info msg="CreateContainer within sandbox \"50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:56:15.447744 containerd[1553]: time="2026-01-23T17:56:15.447705012Z" level=info msg="Container c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:56:15.452490 containerd[1553]: time="2026-01-23T17:56:15.452190505Z" level=info msg="Container 6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:56:15.457989 containerd[1553]: time="2026-01-23T17:56:15.457941996Z" level=info msg="CreateContainer within sandbox \"82eef3b8dc390616bd026ae186bb68693badf9d655b3d87d4a2bc6a59c8f5626\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9\"" Jan 23 17:56:15.459362 containerd[1553]: time="2026-01-23T17:56:15.459279232Z" level=info msg="StartContainer for \"c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9\"" Jan 23 17:56:15.459776 containerd[1553]: time="2026-01-23T17:56:15.459741137Z" level=info msg="Container b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:56:15.461174 containerd[1553]: time="2026-01-23T17:56:15.461148011Z" level=info msg="connecting to shim c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9" address="unix:///run/containerd/s/469631e4446112069354d2f2c9dfaa26f3f4baf79e38fd273f8f76c07dabe2af" protocol=ttrpc version=3 Jan 23 17:56:15.465951 containerd[1553]: time="2026-01-23T17:56:15.465904615Z" level=info msg="CreateContainer within sandbox \"27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e\"" Jan 23 17:56:15.467558 containerd[1553]: time="2026-01-23T17:56:15.466599632Z" level=info msg="StartContainer for \"6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e\"" Jan 23 17:56:15.467881 containerd[1553]: time="2026-01-23T17:56:15.467854031Z" level=info msg="connecting to shim 6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e" address="unix:///run/containerd/s/bbc7fdc360ce5f26094cacbefbe3de523f0fbe7ee39d3e4a252702fe1ef53027" protocol=ttrpc version=3 Jan 23 17:56:15.474242 containerd[1553]: time="2026-01-23T17:56:15.474192062Z" level=info msg="CreateContainer within sandbox \"50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99\"" Jan 23 17:56:15.475331 containerd[1553]: time="2026-01-23T17:56:15.475297466Z" level=info msg="StartContainer for \"b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99\"" Jan 23 17:56:15.478060 containerd[1553]: time="2026-01-23T17:56:15.477974018Z" level=info msg="connecting to shim b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99" address="unix:///run/containerd/s/79fd21cad128a47240780fb907272b4146e57575ac12f1c3832df64e7457c3a3" protocol=ttrpc version=3 Jan 23 17:56:15.496729 systemd[1]: Started cri-containerd-c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9.scope - libcontainer container c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9. Jan 23 17:56:15.501700 systemd[1]: Started cri-containerd-6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e.scope - libcontainer container 6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e. Jan 23 17:56:15.515726 systemd[1]: Started cri-containerd-b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99.scope - libcontainer container b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99. 
Jan 23 17:56:15.582052 kubelet[2396]: I0123 17:56:15.581955 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:15.582341 kubelet[2396]: E0123 17:56:15.582306 2396 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.74.11:6443/api/v1/nodes\": dial tcp 46.224.74.11:6443: connect: connection refused" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:15.600641 containerd[1553]: time="2026-01-23T17:56:15.600372277Z" level=info msg="StartContainer for \"b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99\" returns successfully" Jan 23 17:56:15.600641 containerd[1553]: time="2026-01-23T17:56:15.600496033Z" level=info msg="StartContainer for \"c4a6aeb8861c31c37cd1d5c7cdb55d169075775e512df0a342567553fb4af4c9\" returns successfully" Jan 23 17:56:15.605702 containerd[1553]: time="2026-01-23T17:56:15.603919601Z" level=info msg="StartContainer for \"6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e\" returns successfully" Jan 23 17:56:15.644872 kubelet[2396]: E0123 17:56:15.644785 2396 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.224.74.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.74.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 17:56:15.822477 kubelet[2396]: E0123 17:56:15.821681 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:15.826559 kubelet[2396]: E0123 17:56:15.826189 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:15.828774 kubelet[2396]: E0123 17:56:15.828744 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:16.384783 kubelet[2396]: I0123 17:56:16.384748 2396 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:16.832643 kubelet[2396]: E0123 17:56:16.832494 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:16.832994 kubelet[2396]: E0123 17:56:16.832599 2396 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.112736 kubelet[2396]: E0123 17:56:19.112663 2396 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-3-b08bb0c7a1\" not found" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.207903 kubelet[2396]: I0123 17:56:19.207805 2396 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.285563 kubelet[2396]: I0123 17:56:19.285348 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.293478 kubelet[2396]: E0123 17:56:19.293417 2396 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.293478 kubelet[2396]: I0123 17:56:19.293461 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.296254 kubelet[2396]: E0123 17:56:19.296222 2396 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-3-b08bb0c7a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.296254 kubelet[2396]: I0123 17:56:19.296253 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.298266 kubelet[2396]: E0123 17:56:19.298226 2396 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:19.761035 kubelet[2396]: I0123 17:56:19.760635 2396 apiserver.go:52] "Watching apiserver" Jan 23 17:56:19.782948 kubelet[2396]: I0123 17:56:19.782829 2396 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:56:20.998920 kubelet[2396]: I0123 17:56:20.998584 2396 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:21.445292 systemd[1]: Reload requested from client PID 2679 ('systemctl') (unit session-7.scope)... Jan 23 17:56:21.445996 systemd[1]: Reloading... Jan 23 17:56:21.547555 zram_generator::config[2725]: No configuration found. Jan 23 17:56:21.758063 systemd[1]: Reloading finished in 311 ms. Jan 23 17:56:21.803496 kubelet[2396]: I0123 17:56:21.803424 2396 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:56:21.804067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:21.819584 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:56:21.820240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:21.820468 systemd[1]: kubelet.service: Consumed 1.720s CPU time, 125.4M memory peak. Jan 23 17:56:21.827045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:56:21.993378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:56:22.005009 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:56:22.055957 kubelet[2768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:56:22.055957 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:56:22.055957 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:56:22.055957 kubelet[2768]: I0123 17:56:22.055876 2768 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:56:22.068175 kubelet[2768]: I0123 17:56:22.067948 2768 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 17:56:22.068175 kubelet[2768]: I0123 17:56:22.067992 2768 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:56:22.068323 kubelet[2768]: I0123 17:56:22.068231 2768 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 17:56:22.069961 kubelet[2768]: I0123 17:56:22.069916 2768 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 17:56:22.073441 kubelet[2768]: I0123 17:56:22.073410 2768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:56:22.080362 kubelet[2768]: I0123 17:56:22.079064 2768 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:56:22.082310 kubelet[2768]: I0123 17:56:22.082270 2768 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 17:56:22.083861 kubelet[2768]: I0123 17:56:22.083812 2768 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:56:22.084034 kubelet[2768]: I0123 17:56:22.083863 2768 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-3-b08bb0c7a1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:56:22.084123 kubelet[2768]: I0123 17:56:22.084040 2768 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 17:56:22.084123 kubelet[2768]: I0123 17:56:22.084050 2768 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 17:56:22.084123 kubelet[2768]: I0123 17:56:22.084096 2768 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:56:22.084374 kubelet[2768]: 
I0123 17:56:22.084345 2768 kubelet.go:480] "Attempting to sync node with API server" Jan 23 17:56:22.084374 kubelet[2768]: I0123 17:56:22.084370 2768 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:56:22.085042 kubelet[2768]: I0123 17:56:22.084904 2768 kubelet.go:386] "Adding apiserver pod source" Jan 23 17:56:22.085042 kubelet[2768]: I0123 17:56:22.084939 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:56:22.090464 kubelet[2768]: I0123 17:56:22.090394 2768 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:56:22.094261 kubelet[2768]: I0123 17:56:22.094226 2768 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 17:56:22.118931 kubelet[2768]: I0123 17:56:22.118894 2768 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:56:22.118931 kubelet[2768]: I0123 17:56:22.118941 2768 server.go:1289] "Started kubelet" Jan 23 17:56:22.122393 kubelet[2768]: I0123 17:56:22.122241 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:56:22.135213 kubelet[2768]: I0123 17:56:22.134663 2768 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:56:22.137678 kubelet[2768]: I0123 17:56:22.137623 2768 server.go:317] "Adding debug handlers to kubelet server" Jan 23 17:56:22.144285 kubelet[2768]: I0123 17:56:22.142915 2768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:56:22.144285 kubelet[2768]: I0123 17:56:22.143132 2768 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:56:22.144285 kubelet[2768]: I0123 17:56:22.143341 2768 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:56:22.144774 kubelet[2768]: I0123 17:56:22.144749 2768 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:56:22.145994 kubelet[2768]: I0123 17:56:22.145966 2768 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:56:22.149565 kubelet[2768]: I0123 17:56:22.147704 2768 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:56:22.154484 kubelet[2768]: I0123 17:56:22.154451 2768 factory.go:223] Registration of the systemd container factory successfully Jan 23 17:56:22.154642 kubelet[2768]: I0123 17:56:22.154588 2768 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:56:22.163077 kubelet[2768]: I0123 17:56:22.162909 2768 factory.go:223] Registration of the containerd container factory successfully Jan 23 17:56:22.163626 kubelet[2768]: I0123 17:56:22.163600 2768 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 17:56:22.165035 kubelet[2768]: I0123 17:56:22.165015 2768 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 23 17:56:22.165270 kubelet[2768]: I0123 17:56:22.165142 2768 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 17:56:22.165270 kubelet[2768]: I0123 17:56:22.165174 2768 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:56:22.165270 kubelet[2768]: I0123 17:56:22.165181 2768 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 17:56:22.165270 kubelet[2768]: E0123 17:56:22.165222 2768 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:56:22.169047 kubelet[2768]: E0123 17:56:22.169002 2768 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:56:22.234720 kubelet[2768]: I0123 17:56:22.234687 2768 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:56:22.234720 kubelet[2768]: I0123 17:56:22.234714 2768 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:56:22.234912 kubelet[2768]: I0123 17:56:22.234740 2768 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:56:22.234937 kubelet[2768]: I0123 17:56:22.234924 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:56:22.234956 kubelet[2768]: I0123 17:56:22.234934 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:56:22.234956 kubelet[2768]: I0123 17:56:22.234952 2768 policy_none.go:49] "None policy: Start" Jan 23 17:56:22.234996 kubelet[2768]: I0123 17:56:22.234961 2768 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:56:22.234996 kubelet[2768]: I0123 17:56:22.234971 2768 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:56:22.235258 kubelet[2768]: I0123 17:56:22.235051 2768 state_mem.go:75] "Updated machine memory state" Jan 23 17:56:22.245382 kubelet[2768]: E0123 17:56:22.245336 2768 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 17:56:22.245973 kubelet[2768]: I0123 17:56:22.245937 2768 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:56:22.246169 kubelet[2768]: I0123 17:56:22.246130 2768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:56:22.247042 kubelet[2768]: I0123 17:56:22.246847 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:56:22.251154 kubelet[2768]: E0123 17:56:22.251084 2768 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 17:56:22.267715 kubelet[2768]: I0123 17:56:22.267371 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.268252 kubelet[2768]: I0123 17:56:22.267788 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.268500 kubelet[2768]: I0123 17:56:22.267954 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.279626 kubelet[2768]: E0123 17:56:22.279582 2768 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.359570 kubelet[2768]: I0123 17:56:22.358006 2768 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.368164 kubelet[2768]: I0123 17:56:22.368091 2768 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.368164 kubelet[2768]: I0123 17:56:22.368217 2768 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.449782 kubelet[2768]: I0123 17:56:22.449711 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87c5f1b495b2801ebdcb4e77dba0b154-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"87c5f1b495b2801ebdcb4e77dba0b154\") " pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.449782 kubelet[2768]: I0123 17:56:22.449767 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6e196e44ab1b71476ab95a6ede4ece0-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"b6e196e44ab1b71476ab95a6ede4ece0\") " pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450022 kubelet[2768]: I0123 17:56:22.449797 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450022 kubelet[2768]: I0123 17:56:22.449817 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450022 kubelet[2768]: I0123 17:56:22.449833 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6e196e44ab1b71476ab95a6ede4ece0-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"b6e196e44ab1b71476ab95a6ede4ece0\") " pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450022 kubelet[2768]: I0123 17:56:22.449850 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6e196e44ab1b71476ab95a6ede4ece0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"b6e196e44ab1b71476ab95a6ede4ece0\") " pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450022 kubelet[2768]: I0123 17:56:22.449870 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450251 kubelet[2768]: I0123 17:56:22.449887 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:22.450251 kubelet[2768]: I0123 17:56:22.449904 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79e42748197e1b325033be20ff13cee1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1\" (UID: \"79e42748197e1b325033be20ff13cee1\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:23.088619 kubelet[2768]: I0123 17:56:23.088248 2768 apiserver.go:52] "Watching apiserver" Jan 23 17:56:23.149885 kubelet[2768]: I0123 17:56:23.149821 2768 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:56:23.205557 kubelet[2768]: I0123 17:56:23.204386 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:23.214398 kubelet[2768]: E0123 17:56:23.214351 2768 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-3-b08bb0c7a1\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:56:23.239030 kubelet[2768]: I0123 17:56:23.238482 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" podStartSLOduration=1.238461553 podStartE2EDuration="1.238461553s" podCreationTimestamp="2026-01-23 17:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:56:23.227354632 +0000 UTC m=+1.215885585" watchObservedRunningTime="2026-01-23 17:56:23.238461553 +0000 UTC m=+1.226992506" Jan 23 17:56:23.263163 kubelet[2768]: I0123 17:56:23.261789 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-3-b08bb0c7a1" podStartSLOduration=1.261331298 podStartE2EDuration="1.261331298s" podCreationTimestamp="2026-01-23 17:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:56:23.240384224 +0000 UTC m=+1.228915217" watchObservedRunningTime="2026-01-23 17:56:23.261331298 +0000 UTC m=+1.249862251" Jan 23 17:56:23.284728 kubelet[2768]: I0123 17:56:23.284671 2768 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-3-b08bb0c7a1" podStartSLOduration=2.284651992 podStartE2EDuration="2.284651992s" podCreationTimestamp="2026-01-23 17:56:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:56:23.266559407 +0000 UTC m=+1.255090400" watchObservedRunningTime="2026-01-23 17:56:23.284651992 +0000 UTC m=+1.273182945" Jan 23 17:56:26.868542 kubelet[2768]: I0123 17:56:26.868483 2768 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 17:56:26.869152 containerd[1553]: time="2026-01-23T17:56:26.869016083Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 17:56:26.869927 kubelet[2768]: I0123 17:56:26.869571 2768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 17:56:27.436053 systemd[1]: Created slice kubepods-besteffort-pod982249a3_328f_43b7_8574_4ca6fbb88960.slice - libcontainer container kubepods-besteffort-pod982249a3_328f_43b7_8574_4ca6fbb88960.slice. Jan 23 17:56:27.479554 kubelet[2768]: I0123 17:56:27.479234 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/982249a3-328f-43b7-8574-4ca6fbb88960-kube-proxy\") pod \"kube-proxy-8mmp2\" (UID: \"982249a3-328f-43b7-8574-4ca6fbb88960\") " pod="kube-system/kube-proxy-8mmp2" Jan 23 17:56:27.479554 kubelet[2768]: I0123 17:56:27.479326 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/982249a3-328f-43b7-8574-4ca6fbb88960-lib-modules\") pod \"kube-proxy-8mmp2\" (UID: \"982249a3-328f-43b7-8574-4ca6fbb88960\") " pod="kube-system/kube-proxy-8mmp2" Jan 23 17:56:27.479554 kubelet[2768]: I0123 17:56:27.479367 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/982249a3-328f-43b7-8574-4ca6fbb88960-xtables-lock\") pod \"kube-proxy-8mmp2\" (UID: \"982249a3-328f-43b7-8574-4ca6fbb88960\") " pod="kube-system/kube-proxy-8mmp2" Jan 23 17:56:27.479554 kubelet[2768]: I0123 17:56:27.479400 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dst76\" (UniqueName: \"kubernetes.io/projected/982249a3-328f-43b7-8574-4ca6fbb88960-kube-api-access-dst76\") pod \"kube-proxy-8mmp2\" (UID: \"982249a3-328f-43b7-8574-4ca6fbb88960\") " pod="kube-system/kube-proxy-8mmp2" Jan 23 17:56:27.595187 kubelet[2768]: E0123 17:56:27.594882 2768 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 17:56:27.595187 kubelet[2768]: E0123 17:56:27.595135 2768 projected.go:194] Error preparing data for projected volume kube-api-access-dst76 for pod kube-system/kube-proxy-8mmp2: configmap "kube-root-ca.crt" not found Jan 23 17:56:27.596648 kubelet[2768]: E0123 17:56:27.595218 2768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/982249a3-328f-43b7-8574-4ca6fbb88960-kube-api-access-dst76 podName:982249a3-328f-43b7-8574-4ca6fbb88960 nodeName:}" failed. No retries permitted until 2026-01-23 17:56:28.095193814 +0000 UTC m=+6.083724767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dst76" (UniqueName: "kubernetes.io/projected/982249a3-328f-43b7-8574-4ca6fbb88960-kube-api-access-dst76") pod "kube-proxy-8mmp2" (UID: "982249a3-328f-43b7-8574-4ca6fbb88960") : configmap "kube-root-ca.crt" not found Jan 23 17:56:28.098376 systemd[1]: Created slice kubepods-besteffort-pod33153a50_3b22_47c2_95a1_d681e84de39d.slice - libcontainer container kubepods-besteffort-pod33153a50_3b22_47c2_95a1_d681e84de39d.slice. Jan 23 17:56:28.184725 kubelet[2768]: I0123 17:56:28.184587 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9x5\" (UniqueName: \"kubernetes.io/projected/33153a50-3b22-47c2-95a1-d681e84de39d-kube-api-access-2h9x5\") pod \"tigera-operator-7dcd859c48-zv5rb\" (UID: \"33153a50-3b22-47c2-95a1-d681e84de39d\") " pod="tigera-operator/tigera-operator-7dcd859c48-zv5rb" Jan 23 17:56:28.185450 kubelet[2768]: I0123 17:56:28.185369 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33153a50-3b22-47c2-95a1-d681e84de39d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zv5rb\" (UID: \"33153a50-3b22-47c2-95a1-d681e84de39d\") " pod="tigera-operator/tigera-operator-7dcd859c48-zv5rb" Jan 23 17:56:28.346813 containerd[1553]: time="2026-01-23T17:56:28.346506248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mmp2,Uid:982249a3-328f-43b7-8574-4ca6fbb88960,Namespace:kube-system,Attempt:0,}" Jan 23 17:56:28.368971 containerd[1553]: time="2026-01-23T17:56:28.368712880Z" level=info msg="connecting to shim ab7874683e2b0940965a83b468f277973b8d078e0ee3a631dc1dab600d11431f" address="unix:///run/containerd/s/e510c890453450f6249fd25cb692587b4ed729dd50dc6c59044a2e3d27791e2d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:56:28.399863 systemd[1]: Started cri-containerd-ab7874683e2b0940965a83b468f277973b8d078e0ee3a631dc1dab600d11431f.scope - libcontainer container ab7874683e2b0940965a83b468f277973b8d078e0ee3a631dc1dab600d11431f. 
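Annotation: the MountVolume.SetUp failure at the top of this stretch is scheduled for retry with durationBeforeRetry 500ms — kube-root-ca.crt simply has not been published into the namespace yet. A sketch of the implied retry cadence, assuming the usual exponential backoff (initial 500ms, doubling per failure, with a cap); the 122s cap is taken from upstream kubelet defaults and may vary by release:

```python
INITIAL = 0.5   # seconds, matches the logged "durationBeforeRetry 500ms"
FACTOR = 2.0    # assumed doubling per consecutive failure
CAP = 122.0     # assumed maximum backoff (2m2s in upstream defaults)

def backoff_schedule(failures):
    """Wait applied before each retry across `failures` consecutive failures."""
    waits, wait = [], INITIAL
    for _ in range(failures):
        waits.append(min(wait, CAP))
        wait *= FACTOR
    return waits

print(backoff_schedule(8))
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0]
```

Here a single retry sufficed: the failure is logged at 17:56:27.595, the retry is permitted at 17:56:28.095, and the kube-proxy sandbox starts about a quarter second after that.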
Jan 23 17:56:28.405100 containerd[1553]: time="2026-01-23T17:56:28.404732009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zv5rb,Uid:33153a50-3b22-47c2-95a1-d681e84de39d,Namespace:tigera-operator,Attempt:0,}" Jan 23 17:56:28.433256 containerd[1553]: time="2026-01-23T17:56:28.432786632Z" level=info msg="connecting to shim 06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31" address="unix:///run/containerd/s/e3cfd5523036778dc2645b566eb82284e6c3e1e023f44f926061e10acc426df1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:56:28.440390 containerd[1553]: time="2026-01-23T17:56:28.440352146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8mmp2,Uid:982249a3-328f-43b7-8574-4ca6fbb88960,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab7874683e2b0940965a83b468f277973b8d078e0ee3a631dc1dab600d11431f\"" Jan 23 17:56:28.446860 containerd[1553]: time="2026-01-23T17:56:28.446758406Z" level=info msg="CreateContainer within sandbox \"ab7874683e2b0940965a83b468f277973b8d078e0ee3a631dc1dab600d11431f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 17:56:28.459870 containerd[1553]: time="2026-01-23T17:56:28.459581124Z" level=info msg="Container 6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:56:28.477934 containerd[1553]: time="2026-01-23T17:56:28.477868802Z" level=info msg="CreateContainer within sandbox \"ab7874683e2b0940965a83b468f277973b8d078e0ee3a631dc1dab600d11431f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267\"" Jan 23 17:56:28.478451 systemd[1]: Started cri-containerd-06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31.scope - libcontainer container 06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31. Jan 23 17:56:28.482550 containerd[1553]: time="2026-01-23T17:56:28.482230066Z" level=info msg="StartContainer for \"6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267\"" Jan 23 17:56:28.486209 containerd[1553]: time="2026-01-23T17:56:28.485495635Z" level=info msg="connecting to shim 6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267" address="unix:///run/containerd/s/e510c890453450f6249fd25cb692587b4ed729dd50dc6c59044a2e3d27791e2d" protocol=ttrpc version=3 Jan 23 17:56:28.510935 systemd[1]: Started cri-containerd-6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267.scope - libcontainer container 6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267. 
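Annotation, for reference against the HardEvictionThresholds dumped in the NodeConfig when kubelet 2768 started above: a minimal sketch that evaluates those exact signals. The percentages and the 100Mi memory floor are transcribed from that JSON; the sample capacity below is made up for illustration:

```python
# Signal -> (kind, value), transcribed from the logged HardEvictionThresholds.
THRESHOLDS = {
    "imagefs.available":  ("pct", 0.15),
    "imagefs.inodesFree": ("pct", 0.05),
    "memory.available":   ("abs", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available":   ("pct", 0.10),
    "nodefs.inodesFree":  ("pct", 0.05),
}

def breached(signal, available, capacity):
    """True if `available` is below the hard eviction threshold for `signal`."""
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "abs" else value * capacity
    return available < limit

# A 40 GiB nodefs with 3 GiB free is under the 10% threshold:
print(breached("nodefs.available", 3 * 2**30, 40 * 2**30))  # True
```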
Jan 23 17:56:28.534402 containerd[1553]: time="2026-01-23T17:56:28.534299403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zv5rb,Uid:33153a50-3b22-47c2-95a1-d681e84de39d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31\"" Jan 23 17:56:28.537767 containerd[1553]: time="2026-01-23T17:56:28.537726127Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 17:56:28.597836 containerd[1553]: time="2026-01-23T17:56:28.597752929Z" level=info msg="StartContainer for \"6d87fa087a46be153c9ed273dd0183fb77a1c736d6c80488b6d547937a60c267\" returns successfully" Jan 23 17:56:29.239394 kubelet[2768]: I0123 17:56:29.239287 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8mmp2" podStartSLOduration=2.239262522 podStartE2EDuration="2.239262522s" podCreationTimestamp="2026-01-23 17:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:56:29.239123845 +0000 UTC m=+7.227654798" watchObservedRunningTime="2026-01-23 17:56:29.239262522 +0000 UTC m=+7.227793595" Jan 23 17:56:30.121247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300747404.mount: Deactivated successfully. Jan 23 17:56:30.611392 containerd[1553]: time="2026-01-23T17:56:30.611263552Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:30.613322 containerd[1553]: time="2026-01-23T17:56:30.613268150Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 17:56:30.614429 containerd[1553]: time="2026-01-23T17:56:30.614363847Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:30.619443 containerd[1553]: time="2026-01-23T17:56:30.619377742Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:30.621757 containerd[1553]: time="2026-01-23T17:56:30.621701613Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.083800649s" Jan 23 17:56:30.621757 containerd[1553]: time="2026-01-23T17:56:30.621750892Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 17:56:30.630150 containerd[1553]: time="2026-01-23T17:56:30.630110397Z" level=info msg="CreateContainer within sandbox \"06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 17:56:30.639468 containerd[1553]: time="2026-01-23T17:56:30.639140288Z" level=info msg="Container 1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:56:30.643329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275596948.mount: 
Deactivated successfully. Jan 23 17:56:30.652230 containerd[1553]: time="2026-01-23T17:56:30.652155495Z" level=info msg="CreateContainer within sandbox \"06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715\"" Jan 23 17:56:30.653384 containerd[1553]: time="2026-01-23T17:56:30.653330590Z" level=info msg="StartContainer for \"1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715\"" Jan 23 17:56:30.655245 containerd[1553]: time="2026-01-23T17:56:30.655024515Z" level=info msg="connecting to shim 1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715" address="unix:///run/containerd/s/e3cfd5523036778dc2645b566eb82284e6c3e1e023f44f926061e10acc426df1" protocol=ttrpc version=3 Jan 23 17:56:30.684898 systemd[1]: Started cri-containerd-1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715.scope - libcontainer container 1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715. Jan 23 17:56:30.722400 containerd[1553]: time="2026-01-23T17:56:30.722364663Z" level=info msg="StartContainer for \"1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715\" returns successfully" Jan 23 17:56:37.073045 sudo[1813]: pam_unix(sudo:session): session closed for user root Jan 23 17:56:37.172989 sshd[1812]: Connection closed by 68.220.241.50 port 52150 Jan 23 17:56:37.174621 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:37.181153 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit. Jan 23 17:56:37.181935 systemd[1]: sshd@6-46.224.74.11:22-68.220.241.50:52150.service: Deactivated successfully. Jan 23 17:56:37.186258 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 17:56:37.186499 systemd[1]: session-7.scope: Consumed 6.630s CPU time, 222.6M memory peak. Jan 23 17:56:37.189990 systemd-logind[1529]: Removed session 7. Jan 23 17:56:47.644232 kubelet[2768]: I0123 17:56:47.644020 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zv5rb" podStartSLOduration=17.556492173 podStartE2EDuration="19.644001064s" podCreationTimestamp="2026-01-23 17:56:28 +0000 UTC" firstStartedPulling="2026-01-23 17:56:28.536927785 +0000 UTC m=+6.525458738" lastFinishedPulling="2026-01-23 17:56:30.624436676 +0000 UTC m=+8.612967629" observedRunningTime="2026-01-23 17:56:31.257210248 +0000 UTC m=+9.245741201" watchObservedRunningTime="2026-01-23 17:56:47.644001064 +0000 UTC m=+25.632532017" Jan 23 17:56:47.657464 systemd[1]: Created slice kubepods-besteffort-pod12dc2524_8c47_40f3_82c5_2ee0665f1b98.slice - libcontainer container kubepods-besteffort-pod12dc2524_8c47_40f3_82c5_2ee0665f1b98.slice. 
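Annotation: the tigera-operator startup record above is internally consistent and worth a quick check — podStartSLOduration excludes image pulling, so E2E minus the pull window should reproduce it, and the containerd pull stats two entries earlier give the effective pull throughput. All constants below are copied from the log:

```python
# Seconds-within-minute from the kubelet timestamps (all inside 17:56).
first_pull = 28.536927785   # firstStartedPulling
last_pull  = 30.624436676   # lastFinishedPulling
e2e        = 19.644001064   # podStartE2EDuration

pull = last_pull - first_pull
slo  = e2e - pull
print(round(pull, 9))   # 2.087508891
print(round(slo, 9))    # 17.556492173 -> matches podStartSLOduration

# Pull throughput from containerd's own figures for the operator image:
bytes_read = 22152004       # "active requests=0, bytes read=22152004"
pull_secs  = 2.083800649    # "Pulled image ... in 2.083800649s"
print(f"{bytes_read / pull_secs / 2**20:.1f} MiB/s")  # ~10.1
```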
Jan 23 17:56:47.710863 kubelet[2768]: I0123 17:56:47.710780 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12dc2524-8c47-40f3-82c5-2ee0665f1b98-tigera-ca-bundle\") pod \"calico-typha-6ff46598dd-vtk9s\" (UID: \"12dc2524-8c47-40f3-82c5-2ee0665f1b98\") " pod="calico-system/calico-typha-6ff46598dd-vtk9s" Jan 23 17:56:47.711374 kubelet[2768]: I0123 17:56:47.711295 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/12dc2524-8c47-40f3-82c5-2ee0665f1b98-typha-certs\") pod \"calico-typha-6ff46598dd-vtk9s\" (UID: \"12dc2524-8c47-40f3-82c5-2ee0665f1b98\") " pod="calico-system/calico-typha-6ff46598dd-vtk9s" Jan 23 17:56:47.712089 kubelet[2768]: I0123 17:56:47.712010 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j77rt\" (UniqueName: \"kubernetes.io/projected/12dc2524-8c47-40f3-82c5-2ee0665f1b98-kube-api-access-j77rt\") pod \"calico-typha-6ff46598dd-vtk9s\" (UID: \"12dc2524-8c47-40f3-82c5-2ee0665f1b98\") " pod="calico-system/calico-typha-6ff46598dd-vtk9s" Jan 23 17:56:47.889365 systemd[1]: Created slice kubepods-besteffort-pod60554b5d_af84_4b21_850c_d206426522eb.slice - libcontainer container kubepods-besteffort-pod60554b5d_af84_4b21_850c_d206426522eb.slice. Jan 23 17:56:47.913334 kubelet[2768]: I0123 17:56:47.912781 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-var-run-calico\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913334 kubelet[2768]: I0123 17:56:47.912837 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt6tv\" (UniqueName: \"kubernetes.io/projected/60554b5d-af84-4b21-850c-d206426522eb-kube-api-access-qt6tv\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913334 kubelet[2768]: I0123 17:56:47.912859 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-flexvol-driver-host\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913334 kubelet[2768]: I0123 17:56:47.912877 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-cni-bin-dir\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913334 kubelet[2768]: I0123 17:56:47.912894 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-policysync\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913682 kubelet[2768]: I0123 17:56:47.912909 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-var-lib-calico\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913682 kubelet[2768]: I0123 17:56:47.912924 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-xtables-lock\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913682 kubelet[2768]: I0123 17:56:47.912940 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60554b5d-af84-4b21-850c-d206426522eb-tigera-ca-bundle\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913682 kubelet[2768]: I0123 17:56:47.912955 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-lib-modules\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913682 kubelet[2768]: I0123 17:56:47.912970 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-cni-log-dir\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913826 kubelet[2768]: I0123 17:56:47.912986 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60554b5d-af84-4b21-850c-d206426522eb-cni-net-dir\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.913826 kubelet[2768]: I0123 17:56:47.913019 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60554b5d-af84-4b21-850c-d206426522eb-node-certs\") pod \"calico-node-dvz9m\" (UID: \"60554b5d-af84-4b21-850c-d206426522eb\") " pod="calico-system/calico-node-dvz9m" Jan 23 17:56:47.963057 containerd[1553]: time="2026-01-23T17:56:47.963002724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff46598dd-vtk9s,Uid:12dc2524-8c47-40f3-82c5-2ee0665f1b98,Namespace:calico-system,Attempt:0,}" Jan 23 17:56:47.992436 containerd[1553]: time="2026-01-23T17:56:47.992381210Z" level=info msg="connecting to shim 8b6e888c35f7d62701b7112c823441492b68b0881647274663adf46841245565" address="unix:///run/containerd/s/938f7aa9176803de7ee99dbb1aa48476abed1f2472e67b808d4cfdde1969e714" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:56:48.017704 systemd[1]: Started cri-containerd-8b6e888c35f7d62701b7112c823441492b68b0881647274663adf46841245565.scope - libcontainer container 8b6e888c35f7d62701b7112c823441492b68b0881647274663adf46841245565. 
Jan 23 17:56:48.023750 kubelet[2768]: E0123 17:56:48.023721 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.024016 kubelet[2768]: W0123 17:56:48.023936 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.024016 kubelet[2768]: E0123 17:56:48.023983 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.039935 kubelet[2768]: E0123 17:56:48.039748 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.039935 kubelet[2768]: W0123 17:56:48.039775 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.039935 kubelet[2768]: E0123 17:56:48.039798 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.078708 kubelet[2768]: E0123 17:56:48.078548 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:56:48.103936 containerd[1553]: time="2026-01-23T17:56:48.103774471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff46598dd-vtk9s,Uid:12dc2524-8c47-40f3-82c5-2ee0665f1b98,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b6e888c35f7d62701b7112c823441492b68b0881647274663adf46841245565\"" Jan 23 17:56:48.105980 kubelet[2768]: E0123 17:56:48.105888 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.105980 kubelet[2768]: W0123 17:56:48.105925 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.106386 kubelet[2768]: E0123 17:56:48.106232 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.107341 kubelet[2768]: E0123 17:56:48.107322 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.107551 kubelet[2768]: W0123 17:56:48.107385 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.107551 kubelet[2768]: E0123 17:56:48.107441 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:56:48.108229 kubelet[2768]: E0123 17:56:48.108211 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.108574 kubelet[2768]: W0123 17:56:48.108452 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.108574 kubelet[2768]: E0123 17:56:48.108475 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.108914 kubelet[2768]: E0123 17:56:48.108865 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.108914 kubelet[2768]: W0123 17:56:48.108878 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.108914 kubelet[2768]: E0123 17:56:48.108890 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.111145 containerd[1553]: time="2026-01-23T17:56:48.109740216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:56:48.111409 kubelet[2768]: E0123 17:56:48.111396 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.111606 kubelet[2768]: W0123 17:56:48.111540 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.111606 kubelet[2768]: E0123 17:56:48.111564 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.112094 kubelet[2768]: E0123 17:56:48.112031 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.112094 kubelet[2768]: W0123 17:56:48.112045 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.112094 kubelet[2768]: E0123 17:56:48.112056 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.112538 kubelet[2768]: E0123 17:56:48.112473 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.112538 kubelet[2768]: W0123 17:56:48.112485 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.112538 kubelet[2768]: E0123 17:56:48.112497 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:56:48.113077 kubelet[2768]: E0123 17:56:48.113041 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.113077 kubelet[2768]: W0123 17:56:48.113055 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.113854 kubelet[2768]: E0123 17:56:48.113539 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.114180 kubelet[2768]: E0123 17:56:48.114165 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.114621 kubelet[2768]: W0123 17:56:48.114381 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.114621 kubelet[2768]: E0123 17:56:48.114399 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.116187 kubelet[2768]: E0123 17:56:48.115994 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.116187 kubelet[2768]: W0123 17:56:48.116008 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.116187 kubelet[2768]: E0123 17:56:48.116021 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.116364 kubelet[2768]: E0123 17:56:48.116349 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.117000 kubelet[2768]: W0123 17:56:48.116426 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.117637 kubelet[2768]: E0123 17:56:48.117100 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.117857 kubelet[2768]: E0123 17:56:48.117801 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.117944 kubelet[2768]: W0123 17:56:48.117923 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.118156 kubelet[2768]: E0123 17:56:48.118105 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:56:48.119412 kubelet[2768]: E0123 17:56:48.119382 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.119532 kubelet[2768]: W0123 17:56:48.119482 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.119595 kubelet[2768]: E0123 17:56:48.119499 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.119766 kubelet[2768]: I0123 17:56:48.119751 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c91580b5-1014-472e-a6f0-53c9f68e2405-socket-dir\") pod \"csi-node-driver-r2j7b\" (UID: \"c91580b5-1014-472e-a6f0-53c9f68e2405\") " pod="calico-system/csi-node-driver-r2j7b" Jan 23 17:56:48.120079 kubelet[2768]: E0123 17:56:48.120068 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.120243 kubelet[2768]: W0123 17:56:48.120176 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.120243 kubelet[2768]: E0123 17:56:48.120192 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.120587 kubelet[2768]: E0123 17:56:48.120550 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.120587 kubelet[2768]: W0123 17:56:48.120564 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.120587 kubelet[2768]: E0123 17:56:48.120574 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.120919 kubelet[2768]: E0123 17:56:48.120888 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.120919 kubelet[2768]: W0123 17:56:48.120898 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.120919 kubelet[2768]: E0123 17:56:48.120908 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:56:48.121262 kubelet[2768]: E0123 17:56:48.121230 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.121262 kubelet[2768]: W0123 17:56:48.121241 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.121262 kubelet[2768]: E0123 17:56:48.121251 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.121607 kubelet[2768]: E0123 17:56:48.121592 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.121710 kubelet[2768]: W0123 17:56:48.121676 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.121710 kubelet[2768]: E0123 17:56:48.121693 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.121890 kubelet[2768]: I0123 17:56:48.121873 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c91580b5-1014-472e-a6f0-53c9f68e2405-registration-dir\") pod \"csi-node-driver-r2j7b\" (UID: \"c91580b5-1014-472e-a6f0-53c9f68e2405\") " pod="calico-system/csi-node-driver-r2j7b" Jan 23 17:56:48.122145 kubelet[2768]: E0123 17:56:48.122094 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.122145 kubelet[2768]: W0123 17:56:48.122107 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.122145 kubelet[2768]: E0123 17:56:48.122117 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:56:48.122453 kubelet[2768]: E0123 17:56:48.122416 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:56:48.122453 kubelet[2768]: W0123 17:56:48.122428 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:56:48.122453 kubelet[2768]: E0123 17:56:48.122442 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 17:56:48.122778 kubelet[2768]: E0123 17:56:48.122766 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:56:48.122891 kubelet[2768]: W0123 17:56:48.122838 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:56:48.122891 kubelet[2768]: E0123 17:56:48.122853 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:56:48.124259 kubelet[2768]: I0123 17:56:48.124225 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c91580b5-1014-472e-a6f0-53c9f68e2405-kubelet-dir\") pod \"csi-node-driver-r2j7b\" (UID: \"c91580b5-1014-472e-a6f0-53c9f68e2405\") " pod="calico-system/csi-node-driver-r2j7b"
Jan 23 17:56:48.204580 containerd[1553]: time="2026-01-23T17:56:48.202296898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dvz9m,Uid:60554b5d-af84-4b21-850c-d206426522eb,Namespace:calico-system,Attempt:0,}"
Jan 23 17:56:48.230459 kubelet[2768]: I0123 17:56:48.230001 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c91580b5-1014-472e-a6f0-53c9f68e2405-varrun\") pod \"csi-node-driver-r2j7b\" (UID: \"c91580b5-1014-472e-a6f0-53c9f68e2405\") " pod="calico-system/csi-node-driver-r2j7b"
Jan 23 17:56:48.232631 containerd[1553]: time="2026-01-23T17:56:48.231415874Z" level=info msg="connecting to shim 502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c" address="unix:///run/containerd/s/f8bc184ecc6befaeece0b45f964816c0cb5de19143d8707b97eb036788f60006" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:56:48.232960 kubelet[2768]: I0123 17:56:48.232935 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfkx\" (UniqueName: \"kubernetes.io/projected/c91580b5-1014-472e-a6f0-53c9f68e2405-kube-api-access-qvfkx\") pod \"csi-node-driver-r2j7b\" (UID: \"c91580b5-1014-472e-a6f0-53c9f68e2405\") " pod="calico-system/csi-node-driver-r2j7b"
Jan 23 17:56:48.264818 systemd[1]: Started cri-containerd-502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c.scope - libcontainer container 502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c.
Jan 23 17:56:48.305368 containerd[1553]: time="2026-01-23T17:56:48.305319934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dvz9m,Uid:60554b5d-af84-4b21-850c-d206426522eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\""
Jan 23 17:56:49.630256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914051856.mount: Deactivated successfully.
Jan 23 17:56:50.166177 kubelet[2768]: E0123 17:56:50.166100 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405"
Jan 23 17:56:50.641903 containerd[1553]: time="2026-01-23T17:56:50.641828840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:50.643571 containerd[1553]: time="2026-01-23T17:56:50.643303896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 17:56:50.644824 containerd[1553]: time="2026-01-23T17:56:50.644753394Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:50.649949 containerd[1553]: time="2026-01-23T17:56:50.649876673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:50.650521 containerd[1553]: time="2026-01-23T17:56:50.650379985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.540597691s"
Jan 23 17:56:50.650521 containerd[1553]: time="2026-01-23T17:56:50.650444024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 17:56:50.652850 containerd[1553]: time="2026-01-23T17:56:50.652561711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 17:56:50.675646 containerd[1553]: time="2026-01-23T17:56:50.675606750Z" level=info msg="CreateContainer within sandbox \"8b6e888c35f7d62701b7112c823441492b68b0881647274663adf46841245565\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 17:56:50.690576 containerd[1553]: time="2026-01-23T17:56:50.690531276Z" level=info msg="Container aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:56:50.693266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322676703.mount: Deactivated successfully.
Jan 23 17:56:50.702155 containerd[1553]: time="2026-01-23T17:56:50.702111614Z" level=info msg="CreateContainer within sandbox \"8b6e888c35f7d62701b7112c823441492b68b0881647274663adf46841245565\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3\""
Jan 23 17:56:50.703851 containerd[1553]: time="2026-01-23T17:56:50.703669190Z" level=info msg="StartContainer for \"aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3\""
Jan 23 17:56:50.705258 containerd[1553]: time="2026-01-23T17:56:50.705230925Z" level=info msg="connecting to shim aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3" address="unix:///run/containerd/s/938f7aa9176803de7ee99dbb1aa48476abed1f2472e67b808d4cfdde1969e714" protocol=ttrpc version=3
Jan 23 17:56:50.730716 systemd[1]: Started cri-containerd-aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3.scope - libcontainer container aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3.
Jan 23 17:56:50.775575 containerd[1553]: time="2026-01-23T17:56:50.774387840Z" level=info msg="StartContainer for \"aa95461cc09c35960244bd31792d47a2352585a6cfce5fc517dc27f259c166b3\" returns successfully"
Jan 23 17:56:51.311368 kubelet[2768]: I0123 17:56:51.311170 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6ff46598dd-vtk9s" podStartSLOduration=1.767511082 podStartE2EDuration="4.311151284s" podCreationTimestamp="2026-01-23 17:56:47 +0000 UTC" firstStartedPulling="2026-01-23 17:56:48.107953124 +0000 UTC m=+26.096484077" lastFinishedPulling="2026-01-23 17:56:50.651593326 +0000 UTC m=+28.640124279" observedRunningTime="2026-01-23 17:56:51.310380691 +0000 UTC m=+29.298911844" watchObservedRunningTime="2026-01-23 17:56:51.311151284 +0000 UTC m=+29.299682237"
Jan 23 17:56:51.351507 kubelet[2768]: E0123 17:56:51.351444 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:56:51.351507 kubelet[2768]: W0123 17:56:51.351485 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:56:51.351841 kubelet[2768]: E0123 17:56:51.351604 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:56:52.059439 containerd[1553]: time="2026-01-23T17:56:52.059351684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:52.060674 containerd[1553]: time="2026-01-23T17:56:52.060630739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 23 17:56:52.061420 containerd[1553]: time="2026-01-23T17:56:52.061362370Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:52.063587 containerd[1553]: time="2026-01-23T17:56:52.063535063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:56:52.065153 containerd[1553]: time="2026-01-23T17:56:52.065090330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.412478819s"
Jan 23 17:56:52.065153 containerd[1553]: time="2026-01-23T17:56:52.065142732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 23 17:56:52.074505 containerd[1553]: time="2026-01-23T17:56:52.073871187Z" level=info msg="CreateContainer within sandbox \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 23 17:56:52.084718 containerd[1553]: time="2026-01-23T17:56:52.084672770Z" level=info msg="Container 7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:56:52.096910 containerd[1553]: time="2026-01-23T17:56:52.096850613Z" level=info msg="CreateContainer within sandbox \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62\""
Jan 23 17:56:52.098489 containerd[1553]: time="2026-01-23T17:56:52.098446881Z" level=info msg="StartContainer for \"7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62\""
Jan 23 17:56:52.101192 containerd[1553]: time="2026-01-23T17:56:52.101142157Z" level=info msg="connecting to shim 7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62" address="unix:///run/containerd/s/f8bc184ecc6befaeece0b45f964816c0cb5de19143d8707b97eb036788f60006" protocol=ttrpc version=3
Jan 23 17:56:52.131857 systemd[1]: Started cri-containerd-7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62.scope - libcontainer container 7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62.
Jan 23 17:56:52.166761 kubelet[2768]: E0123 17:56:52.166689 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405"
Jan 23 17:56:52.202429 containerd[1553]: time="2026-01-23T17:56:52.202297096Z" level=info msg="StartContainer for \"7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62\" returns successfully"
Jan 23 17:56:52.225171 systemd[1]: cri-containerd-7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62.scope: Deactivated successfully.
Jan 23 17:56:52.229936 containerd[1553]: time="2026-01-23T17:56:52.229804357Z" level=info msg="received container exit event container_id:\"7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62\" id:\"7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62\" pid:3447 exited_at:{seconds:1769191012 nanos:228791073}"
Jan 23 17:56:52.254219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e6beb256330471b6eeb37eaff88d3ebb2ebc4c14a00e9caf1ff4f92cfc5ec62-rootfs.mount: Deactivated successfully.
Jan 23 17:56:52.302328 kubelet[2768]: I0123 17:56:52.302239 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:56:53.309692 containerd[1553]: time="2026-01-23T17:56:53.309631919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 17:56:54.171234 kubelet[2768]: E0123 17:56:54.169830 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:56:55.953830 containerd[1553]: time="2026-01-23T17:56:55.953756543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:55.955483 containerd[1553]: time="2026-01-23T17:56:55.955434687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 17:56:55.956352 containerd[1553]: time="2026-01-23T17:56:55.956286760Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:55.959173 containerd[1553]: time="2026-01-23T17:56:55.959121229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:56:55.960492 containerd[1553]: time="2026-01-23T17:56:55.960441599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.650753438s" Jan 23 17:56:55.960492 containerd[1553]: time="2026-01-23T17:56:55.960481681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 17:56:55.971880 containerd[1553]: time="2026-01-23T17:56:55.971822196Z" level=info msg="CreateContainer within sandbox \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:56:55.981595 containerd[1553]: time="2026-01-23T17:56:55.981547888Z" level=info msg="Container 2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:56:56.001530 containerd[1553]: time="2026-01-23T17:56:56.001371327Z" level=info msg="CreateContainer within sandbox \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6\"" Jan 23 17:56:56.003242 containerd[1553]: time="2026-01-23T17:56:56.003198754Z" level=info msg="StartContainer for \"2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6\"" Jan 23 17:56:56.006369 containerd[1553]: time="2026-01-23T17:56:56.006303269Z" level=info msg="connecting to shim 2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6" 
address="unix:///run/containerd/s/f8bc184ecc6befaeece0b45f964816c0cb5de19143d8707b97eb036788f60006" protocol=ttrpc version=3 Jan 23 17:56:56.031848 systemd[1]: Started cri-containerd-2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6.scope - libcontainer container 2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6. Jan 23 17:56:56.114196 containerd[1553]: time="2026-01-23T17:56:56.114145529Z" level=info msg="StartContainer for \"2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6\" returns successfully" Jan 23 17:56:56.169475 kubelet[2768]: E0123 17:56:56.166983 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:56:56.653866 containerd[1553]: time="2026-01-23T17:56:56.653818404Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:56:56.656580 systemd[1]: cri-containerd-2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6.scope: Deactivated successfully. Jan 23 17:56:56.659082 systemd[1]: cri-containerd-2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6.scope: Consumed 534ms CPU time, 185.9M memory peak, 165.9M written to disk. Jan 23 17:56:56.661332 containerd[1553]: time="2026-01-23T17:56:56.659647579Z" level=info msg="received container exit event container_id:\"2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6\" id:\"2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6\" pid:3504 exited_at:{seconds:1769191016 nanos:657317093}" Jan 23 17:56:56.665240 kubelet[2768]: I0123 17:56:56.665163 2768 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:56:56.699928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c92c3f56efc38ec64a7acc72d621bdfd9ebd57ad77859c01d65ca35f825ddd6-rootfs.mount: Deactivated successfully. Jan 23 17:56:56.780475 systemd[1]: Created slice kubepods-burstable-podc5662b76_1476_436c_8fcf_b63b80628b31.slice - libcontainer container kubepods-burstable-podc5662b76_1476_436c_8fcf_b63b80628b31.slice. Jan 23 17:56:56.797374 systemd[1]: Created slice kubepods-besteffort-podf1b8de16_9b1c_4d91_a786_2640c9f09491.slice - libcontainer container kubepods-besteffort-podf1b8de16_9b1c_4d91_a786_2640c9f09491.slice. 
Jan 23 17:56:56.813719 kubelet[2768]: I0123 17:56:56.813455 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3d96596-5d19-4484-a8c6-296b60023534-calico-apiserver-certs\") pod \"calico-apiserver-6d67bdb5bc-4l6tt\" (UID: \"d3d96596-5d19-4484-a8c6-296b60023534\") " pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" Jan 23 17:56:56.813719 kubelet[2768]: I0123 17:56:56.813532 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t89m\" (UniqueName: \"kubernetes.io/projected/c5662b76-1476-436c-8fcf-b63b80628b31-kube-api-access-6t89m\") pod \"coredns-674b8bbfcf-nmxbw\" (UID: \"c5662b76-1476-436c-8fcf-b63b80628b31\") " pod="kube-system/coredns-674b8bbfcf-nmxbw" Jan 23 17:56:56.813719 kubelet[2768]: I0123 17:56:56.813557 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-ca-bundle\") pod \"whisker-796c85d5cf-gzjm5\" (UID: \"f1b8de16-9b1c-4d91-a786-2640c9f09491\") " pod="calico-system/whisker-796c85d5cf-gzjm5" Jan 23 17:56:56.814538 kubelet[2768]: I0123 17:56:56.814044 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-backend-key-pair\") pod \"whisker-796c85d5cf-gzjm5\" (UID: \"f1b8de16-9b1c-4d91-a786-2640c9f09491\") " pod="calico-system/whisker-796c85d5cf-gzjm5" Jan 23 17:56:56.814538 kubelet[2768]: I0123 17:56:56.814107 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfk4m\" (UniqueName: \"kubernetes.io/projected/f1b8de16-9b1c-4d91-a786-2640c9f09491-kube-api-access-pfk4m\") pod \"whisker-796c85d5cf-gzjm5\" (UID: \"f1b8de16-9b1c-4d91-a786-2640c9f09491\") " pod="calico-system/whisker-796c85d5cf-gzjm5" Jan 23 17:56:56.814538 kubelet[2768]: I0123 17:56:56.814128 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5662b76-1476-436c-8fcf-b63b80628b31-config-volume\") pod \"coredns-674b8bbfcf-nmxbw\" (UID: \"c5662b76-1476-436c-8fcf-b63b80628b31\") " pod="kube-system/coredns-674b8bbfcf-nmxbw" Jan 23 17:56:56.814538 kubelet[2768]: I0123 17:56:56.814148 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9csw4\" (UniqueName: \"kubernetes.io/projected/d3d96596-5d19-4484-a8c6-296b60023534-kube-api-access-9csw4\") pod \"calico-apiserver-6d67bdb5bc-4l6tt\" (UID: \"d3d96596-5d19-4484-a8c6-296b60023534\") " pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" Jan 23 17:56:56.821813 systemd[1]: Created slice kubepods-besteffort-pod0e19fa73_ca02_4aed_bbc2_3496e7625b06.slice - libcontainer container kubepods-besteffort-pod0e19fa73_ca02_4aed_bbc2_3496e7625b06.slice. Jan 23 17:56:56.833377 systemd[1]: Created slice kubepods-besteffort-podff25427e_89d3_494f_819f_e42ac2ef9668.slice - libcontainer container kubepods-besteffort-podff25427e_89d3_494f_819f_e42ac2ef9668.slice. Jan 23 17:56:56.844309 systemd[1]: Created slice kubepods-burstable-pod8eaf0d49_a15b_4271_a48f_ef6ce1231911.slice - libcontainer container kubepods-burstable-pod8eaf0d49_a15b_4271_a48f_ef6ce1231911.slice. 
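The slice names systemd reports in the "Created slice" entries are derived mechanically from the pod's QoS class and UID: dashes in the UID become underscores, since "-" separates levels in a slice name. A small sketch that reproduces the names seen in this log (the real kubelet cgroup manager also handles the cgroupfs driver and guaranteed-QoS pods, which are omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

// Build the systemd slice name kubelet uses for a pod cgroup: dashes in
// the UID are replaced with underscores because "-" encodes the slice
// hierarchy. Sketch only, covering the burstable/besteffort names above.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Matches "Created slice kubepods-besteffort-pod0e19fa73_....slice" above.
	fmt.Println(podSlice("besteffort", "0e19fa73-ca02-4aed-bbc2-3496e7625b06"))
	fmt.Println(podSlice("burstable", "8eaf0d49-a15b-4271-a48f-ef6ce1231911"))
}
```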
Jan 23 17:56:56.857566 systemd[1]: Created slice kubepods-besteffort-pod880a2ec4_932d_40e4_a1c5_e4529584127c.slice - libcontainer container kubepods-besteffort-pod880a2ec4_932d_40e4_a1c5_e4529584127c.slice. Jan 23 17:56:56.870197 systemd[1]: Created slice kubepods-besteffort-pod505697ac_a88f_4c60_b275_ddbfae3b76e6.slice - libcontainer container kubepods-besteffort-pod505697ac_a88f_4c60_b275_ddbfae3b76e6.slice. Jan 23 17:56:56.880370 systemd[1]: Created slice kubepods-besteffort-podd3d96596_5d19_4484_a8c6_296b60023534.slice - libcontainer container kubepods-besteffort-podd3d96596_5d19_4484_a8c6_296b60023534.slice. Jan 23 17:56:56.915662 kubelet[2768]: I0123 17:56:56.914973 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpw7x\" (UniqueName: \"kubernetes.io/projected/0e19fa73-ca02-4aed-bbc2-3496e7625b06-kube-api-access-kpw7x\") pod \"calico-apiserver-5cf56b4c9c-b4lk9\" (UID: \"0e19fa73-ca02-4aed-bbc2-3496e7625b06\") " pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" Jan 23 17:56:56.915662 kubelet[2768]: I0123 17:56:56.915055 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/880a2ec4-932d-40e4-a1c5-e4529584127c-goldmane-key-pair\") pod \"goldmane-666569f655-n4dtl\" (UID: \"880a2ec4-932d-40e4-a1c5-e4529584127c\") " pod="calico-system/goldmane-666569f655-n4dtl" Jan 23 17:56:56.915662 kubelet[2768]: I0123 17:56:56.915086 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8eaf0d49-a15b-4271-a48f-ef6ce1231911-config-volume\") pod \"coredns-674b8bbfcf-j22p7\" (UID: \"8eaf0d49-a15b-4271-a48f-ef6ce1231911\") " pod="kube-system/coredns-674b8bbfcf-j22p7" Jan 23 17:56:56.915662 kubelet[2768]: I0123 17:56:56.915130 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqkjs\" (UniqueName: \"kubernetes.io/projected/880a2ec4-932d-40e4-a1c5-e4529584127c-kube-api-access-wqkjs\") pod \"goldmane-666569f655-n4dtl\" (UID: \"880a2ec4-932d-40e4-a1c5-e4529584127c\") " pod="calico-system/goldmane-666569f655-n4dtl" Jan 23 17:56:56.915662 kubelet[2768]: I0123 17:56:56.915155 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5x9w\" (UniqueName: \"kubernetes.io/projected/8eaf0d49-a15b-4271-a48f-ef6ce1231911-kube-api-access-q5x9w\") pod \"coredns-674b8bbfcf-j22p7\" (UID: \"8eaf0d49-a15b-4271-a48f-ef6ce1231911\") " pod="kube-system/coredns-674b8bbfcf-j22p7" Jan 23 17:56:56.915933 kubelet[2768]: I0123 17:56:56.915195 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbpt2\" (UniqueName: \"kubernetes.io/projected/505697ac-a88f-4c60-b275-ddbfae3b76e6-kube-api-access-bbpt2\") pod \"calico-kube-controllers-5d89c9c97d-bcllg\" (UID: \"505697ac-a88f-4c60-b275-ddbfae3b76e6\") " pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" Jan 23 17:56:56.915933 kubelet[2768]: I0123 17:56:56.915795 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/880a2ec4-932d-40e4-a1c5-e4529584127c-config\") pod \"goldmane-666569f655-n4dtl\" (UID: \"880a2ec4-932d-40e4-a1c5-e4529584127c\") " pod="calico-system/goldmane-666569f655-n4dtl" Jan 23 17:56:56.916698 
kubelet[2768]: I0123 17:56:56.916634 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnsqs\" (UniqueName: \"kubernetes.io/projected/ff25427e-89d3-494f-819f-e42ac2ef9668-kube-api-access-fnsqs\") pod \"calico-apiserver-6d67bdb5bc-9sgp8\" (UID: \"ff25427e-89d3-494f-819f-e42ac2ef9668\") " pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" Jan 23 17:56:56.916887 kubelet[2768]: I0123 17:56:56.916701 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ff25427e-89d3-494f-819f-e42ac2ef9668-calico-apiserver-certs\") pod \"calico-apiserver-6d67bdb5bc-9sgp8\" (UID: \"ff25427e-89d3-494f-819f-e42ac2ef9668\") " pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" Jan 23 17:56:56.916887 kubelet[2768]: I0123 17:56:56.916730 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/880a2ec4-932d-40e4-a1c5-e4529584127c-goldmane-ca-bundle\") pod \"goldmane-666569f655-n4dtl\" (UID: \"880a2ec4-932d-40e4-a1c5-e4529584127c\") " pod="calico-system/goldmane-666569f655-n4dtl" Jan 23 17:56:56.916887 kubelet[2768]: I0123 17:56:56.916754 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/505697ac-a88f-4c60-b275-ddbfae3b76e6-tigera-ca-bundle\") pod \"calico-kube-controllers-5d89c9c97d-bcllg\" (UID: \"505697ac-a88f-4c60-b275-ddbfae3b76e6\") " pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" Jan 23 17:56:56.916887 kubelet[2768]: I0123 17:56:56.916801 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e19fa73-ca02-4aed-bbc2-3496e7625b06-calico-apiserver-certs\") pod \"calico-apiserver-5cf56b4c9c-b4lk9\" (UID: \"0e19fa73-ca02-4aed-bbc2-3496e7625b06\") " pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" Jan 23 17:56:57.088914 containerd[1553]: time="2026-01-23T17:56:57.088841536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nmxbw,Uid:c5662b76-1476-436c-8fcf-b63b80628b31,Namespace:kube-system,Attempt:0,}" Jan 23 17:56:57.106249 containerd[1553]: time="2026-01-23T17:56:57.106085748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-796c85d5cf-gzjm5,Uid:f1b8de16-9b1c-4d91-a786-2640c9f09491,Namespace:calico-system,Attempt:0,}" Jan 23 17:56:57.131288 containerd[1553]: time="2026-01-23T17:56:57.130603579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf56b4c9c-b4lk9,Uid:0e19fa73-ca02-4aed-bbc2-3496e7625b06,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:56:57.140138 containerd[1553]: time="2026-01-23T17:56:57.140092436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-9sgp8,Uid:ff25427e-89d3-494f-819f-e42ac2ef9668,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:56:57.153049 containerd[1553]: time="2026-01-23T17:56:57.153002975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j22p7,Uid:8eaf0d49-a15b-4271-a48f-ef6ce1231911,Namespace:kube-system,Attempt:0,}" Jan 23 17:56:57.163165 containerd[1553]: time="2026-01-23T17:56:57.163107094Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-n4dtl,Uid:880a2ec4-932d-40e4-a1c5-e4529584127c,Namespace:calico-system,Attempt:0,}" Jan 23 17:56:57.177720 containerd[1553]: time="2026-01-23T17:56:57.177058109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d89c9c97d-bcllg,Uid:505697ac-a88f-4c60-b275-ddbfae3b76e6,Namespace:calico-system,Attempt:0,}" Jan 23 17:56:57.186160 containerd[1553]: time="2026-01-23T17:56:57.186100350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-4l6tt,Uid:d3d96596-5d19-4484-a8c6-296b60023534,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:56:57.255721 containerd[1553]: time="2026-01-23T17:56:57.255663901Z" level=error msg="Failed to destroy network for sandbox \"7df864c9ee251740ba1039e327190ab45f22dab77e8efdf8bbd1439a5445b2d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.256648 containerd[1553]: time="2026-01-23T17:56:57.256612375Z" level=error msg="Failed to destroy network for sandbox \"1bbb512b2324c4791055e35d9e9f5b5cc4a6b9cdfa44c1d69fbfe2027a069d85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.263361 containerd[1553]: time="2026-01-23T17:56:57.263178088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nmxbw,Uid:c5662b76-1476-436c-8fcf-b63b80628b31,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df864c9ee251740ba1039e327190ab45f22dab77e8efdf8bbd1439a5445b2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.263878 kubelet[2768]: E0123 17:56:57.263841 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df864c9ee251740ba1039e327190ab45f22dab77e8efdf8bbd1439a5445b2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.265005 kubelet[2768]: E0123 17:56:57.264350 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df864c9ee251740ba1039e327190ab45f22dab77e8efdf8bbd1439a5445b2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nmxbw" Jan 23 17:56:57.265005 kubelet[2768]: E0123 17:56:57.264386 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df864c9ee251740ba1039e327190ab45f22dab77e8efdf8bbd1439a5445b2d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nmxbw" Jan 23 17:56:57.265005 kubelet[2768]: E0123 17:56:57.264897 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-nmxbw_kube-system(c5662b76-1476-436c-8fcf-b63b80628b31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nmxbw_kube-system(c5662b76-1476-436c-8fcf-b63b80628b31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7df864c9ee251740ba1039e327190ab45f22dab77e8efdf8bbd1439a5445b2d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nmxbw" podUID="c5662b76-1476-436c-8fcf-b63b80628b31" Jan 23 17:56:57.266636 containerd[1553]: time="2026-01-23T17:56:57.265992348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-796c85d5cf-gzjm5,Uid:f1b8de16-9b1c-4d91-a786-2640c9f09491,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbb512b2324c4791055e35d9e9f5b5cc4a6b9cdfa44c1d69fbfe2027a069d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.266797 kubelet[2768]: E0123 17:56:57.266751 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbb512b2324c4791055e35d9e9f5b5cc4a6b9cdfa44c1d69fbfe2027a069d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.266919 kubelet[2768]: E0123 17:56:57.266820 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbb512b2324c4791055e35d9e9f5b5cc4a6b9cdfa44c1d69fbfe2027a069d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-796c85d5cf-gzjm5" Jan 23 17:56:57.266919 kubelet[2768]: E0123 17:56:57.266844 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bbb512b2324c4791055e35d9e9f5b5cc4a6b9cdfa44c1d69fbfe2027a069d85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-796c85d5cf-gzjm5" Jan 23 17:56:57.266979 kubelet[2768]: E0123 17:56:57.266910 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-796c85d5cf-gzjm5_calico-system(f1b8de16-9b1c-4d91-a786-2640c9f09491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-796c85d5cf-gzjm5_calico-system(f1b8de16-9b1c-4d91-a786-2640c9f09491)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bbb512b2324c4791055e35d9e9f5b5cc4a6b9cdfa44c1d69fbfe2027a069d85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-796c85d5cf-gzjm5" podUID="f1b8de16-9b1c-4d91-a786-2640c9f09491" Jan 23 17:56:57.332944 containerd[1553]: time="2026-01-23T17:56:57.332903205Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:56:57.352176 containerd[1553]: time="2026-01-23T17:56:57.352055965Z" level=error msg="Failed to destroy network for sandbox \"ad4fff337b1469840019db589d626cda2405e34e534d83eb4141cf695cbfd790\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.357444 containerd[1553]: time="2026-01-23T17:56:57.356554605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf56b4c9c-b4lk9,Uid:0e19fa73-ca02-4aed-bbc2-3496e7625b06,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4fff337b1469840019db589d626cda2405e34e534d83eb4141cf695cbfd790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.357624 kubelet[2768]: E0123 17:56:57.356886 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4fff337b1469840019db589d626cda2405e34e534d83eb4141cf695cbfd790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.357624 kubelet[2768]: E0123 17:56:57.356940 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4fff337b1469840019db589d626cda2405e34e534d83eb4141cf695cbfd790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" Jan 23 17:56:57.357624 kubelet[2768]: E0123 17:56:57.356960 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4fff337b1469840019db589d626cda2405e34e534d83eb4141cf695cbfd790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" Jan 23 17:56:57.357715 kubelet[2768]: E0123 17:56:57.357006 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cf56b4c9c-b4lk9_calico-apiserver(0e19fa73-ca02-4aed-bbc2-3496e7625b06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cf56b4c9c-b4lk9_calico-apiserver(0e19fa73-ca02-4aed-bbc2-3496e7625b06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad4fff337b1469840019db589d626cda2405e34e534d83eb4141cf695cbfd790\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:56:57.381810 containerd[1553]: time="2026-01-23T17:56:57.381685017Z" level=error msg="Failed to destroy network for sandbox \"7f392aec4edbfb9dd3c718bded35fa6e7f1ffcf7302dc641026b33409e06fbc3\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.386891 containerd[1553]: time="2026-01-23T17:56:57.386842680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j22p7,Uid:8eaf0d49-a15b-4271-a48f-ef6ce1231911,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f392aec4edbfb9dd3c718bded35fa6e7f1ffcf7302dc641026b33409e06fbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.388318 containerd[1553]: time="2026-01-23T17:56:57.388279452Z" level=error msg="Failed to destroy network for sandbox \"d6baabb6a90624e1ba04abcc384b7c537eafda7040a010d1a012adc562e0cc8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.388399 kubelet[2768]: E0123 17:56:57.388147 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f392aec4edbfb9dd3c718bded35fa6e7f1ffcf7302dc641026b33409e06fbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.388454 kubelet[2768]: E0123 17:56:57.388426 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f392aec4edbfb9dd3c718bded35fa6e7f1ffcf7302dc641026b33409e06fbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j22p7" Jan 23 17:56:57.388491 kubelet[2768]: E0123 17:56:57.388473 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f392aec4edbfb9dd3c718bded35fa6e7f1ffcf7302dc641026b33409e06fbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-j22p7" Jan 23 17:56:57.388570 kubelet[2768]: E0123 17:56:57.388545 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-j22p7_kube-system(8eaf0d49-a15b-4271-a48f-ef6ce1231911)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-j22p7_kube-system(8eaf0d49-a15b-4271-a48f-ef6ce1231911)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f392aec4edbfb9dd3c718bded35fa6e7f1ffcf7302dc641026b33409e06fbc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-j22p7" podUID="8eaf0d49-a15b-4271-a48f-ef6ce1231911" Jan 23 17:56:57.392845 containerd[1553]: time="2026-01-23T17:56:57.392611125Z" level=error msg="Failed to destroy network for sandbox 
\"2de6232264371cad6ef5fbc9645a5b44f02d2f09892d176d49ae6ef20fab13cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.393863 containerd[1553]: time="2026-01-23T17:56:57.393807208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d89c9c97d-bcllg,Uid:505697ac-a88f-4c60-b275-ddbfae3b76e6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6baabb6a90624e1ba04abcc384b7c537eafda7040a010d1a012adc562e0cc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.394450 kubelet[2768]: E0123 17:56:57.394355 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6baabb6a90624e1ba04abcc384b7c537eafda7040a010d1a012adc562e0cc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.394450 kubelet[2768]: E0123 17:56:57.394430 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6baabb6a90624e1ba04abcc384b7c537eafda7040a010d1a012adc562e0cc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" Jan 23 17:56:57.394768 kubelet[2768]: E0123 17:56:57.394462 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6baabb6a90624e1ba04abcc384b7c537eafda7040a010d1a012adc562e0cc8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" Jan 23 17:56:57.394768 kubelet[2768]: E0123 17:56:57.394746 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d89c9c97d-bcllg_calico-system(505697ac-a88f-4c60-b275-ddbfae3b76e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d89c9c97d-bcllg_calico-system(505697ac-a88f-4c60-b275-ddbfae3b76e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6baabb6a90624e1ba04abcc384b7c537eafda7040a010d1a012adc562e0cc8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:56:57.396063 containerd[1553]: time="2026-01-23T17:56:57.396007446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-9sgp8,Uid:ff25427e-89d3-494f-819f-e42ac2ef9668,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2de6232264371cad6ef5fbc9645a5b44f02d2f09892d176d49ae6ef20fab13cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.396805 kubelet[2768]: E0123 17:56:57.396755 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de6232264371cad6ef5fbc9645a5b44f02d2f09892d176d49ae6ef20fab13cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.396805 kubelet[2768]: E0123 17:56:57.396805 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de6232264371cad6ef5fbc9645a5b44f02d2f09892d176d49ae6ef20fab13cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" Jan 23 17:56:57.396989 kubelet[2768]: E0123 17:56:57.396826 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2de6232264371cad6ef5fbc9645a5b44f02d2f09892d176d49ae6ef20fab13cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" Jan 23 17:56:57.396989 kubelet[2768]: E0123 17:56:57.396865 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d67bdb5bc-9sgp8_calico-apiserver(ff25427e-89d3-494f-819f-e42ac2ef9668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d67bdb5bc-9sgp8_calico-apiserver(ff25427e-89d3-494f-819f-e42ac2ef9668)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2de6232264371cad6ef5fbc9645a5b44f02d2f09892d176d49ae6ef20fab13cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:56:57.401012 containerd[1553]: time="2026-01-23T17:56:57.400971142Z" level=error msg="Failed to destroy network for sandbox \"500b44ed0be9afbc19a917bd7dac6560cc922da41350a0574d585f2970cc4a2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.402160 containerd[1553]: time="2026-01-23T17:56:57.401673047Z" level=error msg="Failed to destroy network for sandbox \"583161d0314456d11fc27848b18c91a022ec353ebb44debe858035b328261f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.402823 containerd[1553]: time="2026-01-23T17:56:57.402685163Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-4l6tt,Uid:d3d96596-5d19-4484-a8c6-296b60023534,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"500b44ed0be9afbc19a917bd7dac6560cc922da41350a0574d585f2970cc4a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.403052 kubelet[2768]: E0123 17:56:57.403010 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500b44ed0be9afbc19a917bd7dac6560cc922da41350a0574d585f2970cc4a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.403119 kubelet[2768]: E0123 17:56:57.403089 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500b44ed0be9afbc19a917bd7dac6560cc922da41350a0574d585f2970cc4a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" Jan 23 17:56:57.403159 kubelet[2768]: E0123 17:56:57.403123 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"500b44ed0be9afbc19a917bd7dac6560cc922da41350a0574d585f2970cc4a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" Jan 23 17:56:57.403429 kubelet[2768]: E0123 17:56:57.403196 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d67bdb5bc-4l6tt_calico-apiserver(d3d96596-5d19-4484-a8c6-296b60023534)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d67bdb5bc-4l6tt_calico-apiserver(d3d96596-5d19-4484-a8c6-296b60023534)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"500b44ed0be9afbc19a917bd7dac6560cc922da41350a0574d585f2970cc4a2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:56:57.404269 containerd[1553]: time="2026-01-23T17:56:57.404145535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n4dtl,Uid:880a2ec4-932d-40e4-a1c5-e4529584127c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"583161d0314456d11fc27848b18c91a022ec353ebb44debe858035b328261f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.404461 kubelet[2768]: E0123 17:56:57.404373 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"583161d0314456d11fc27848b18c91a022ec353ebb44debe858035b328261f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:57.404461 kubelet[2768]: E0123 17:56:57.404414 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"583161d0314456d11fc27848b18c91a022ec353ebb44debe858035b328261f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n4dtl" Jan 23 17:56:57.404461 kubelet[2768]: E0123 17:56:57.404435 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"583161d0314456d11fc27848b18c91a022ec353ebb44debe858035b328261f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-n4dtl" Jan 23 17:56:57.404847 kubelet[2768]: E0123 17:56:57.404809 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-n4dtl_calico-system(880a2ec4-932d-40e4-a1c5-e4529584127c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-n4dtl_calico-system(880a2ec4-932d-40e4-a1c5-e4529584127c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"583161d0314456d11fc27848b18c91a022ec353ebb44debe858035b328261f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:56:58.175905 systemd[1]: Created slice kubepods-besteffort-podc91580b5_1014_472e_a6f0_53c9f68e2405.slice - libcontainer container kubepods-besteffort-podc91580b5_1014_472e_a6f0_53c9f68e2405.slice. Jan 23 17:56:58.179240 containerd[1553]: time="2026-01-23T17:56:58.178887414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2j7b,Uid:c91580b5-1014-472e-a6f0-53c9f68e2405,Namespace:calico-system,Attempt:0,}" Jan 23 17:56:58.242968 containerd[1553]: time="2026-01-23T17:56:58.242785638Z" level=error msg="Failed to destroy network for sandbox \"eb998cc6b1b937f4816c22c8880bb81206fd4b88db8171fe80d123a173e58119\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:58.246301 systemd[1]: run-netns-cni\x2d61781274\x2d786b\x2db5d5\x2d2f19\x2d3a170bbaba6f.mount: Deactivated successfully. 
Jan 23 17:56:58.247730 containerd[1553]: time="2026-01-23T17:56:58.247637844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2j7b,Uid:c91580b5-1014-472e-a6f0-53c9f68e2405,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb998cc6b1b937f4816c22c8880bb81206fd4b88db8171fe80d123a173e58119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:58.248360 kubelet[2768]: E0123 17:56:58.248321 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb998cc6b1b937f4816c22c8880bb81206fd4b88db8171fe80d123a173e58119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:56:58.248505 kubelet[2768]: E0123 17:56:58.248464 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb998cc6b1b937f4816c22c8880bb81206fd4b88db8171fe80d123a173e58119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2j7b" Jan 23 17:56:58.248565 kubelet[2768]: E0123 17:56:58.248538 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb998cc6b1b937f4816c22c8880bb81206fd4b88db8171fe80d123a173e58119\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r2j7b" Jan 23 17:56:58.248735 kubelet[2768]: E0123 17:56:58.248703 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb998cc6b1b937f4816c22c8880bb81206fd4b88db8171fe80d123a173e58119\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:57:01.805821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407632822.mount: Deactivated successfully. 
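Every sandbox add and delete above fails on the same precondition: per the error text, the Calico CNI plugin stats /var/lib/calico/nodename, a file that exists only once the calico/node container is running and has bind-mounted /var/lib/calico/. A minimal sketch of that gate, based only on the log message rather than the plugin's source:

```go
package main

import (
	"fmt"
	"os"
)

// Reproduce the readiness gate implied by the repeated error text: CNI
// adds and deletes are refused until calico-node has written its
// nodename file. os.Stat's error already formats as
// "stat /var/lib/calico/nodename: no such file or directory",
// matching the log lines above.
func nodenameReady() error {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := nodenameReady(); err != nil {
		fmt.Println("sandbox setup would fail:", err)
		return
	}
	fmt.Println("calico-node has published its nodename; CNI calls can proceed")
}
```

The retries resolve themselves once the calico-node container started below writes that file, which is why the same pods are scheduled successfully later in the log.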
Jan 23 17:57:01.847484 containerd[1553]: time="2026-01-23T17:57:01.847405981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 17:57:01.850733 containerd[1553]: time="2026-01-23T17:57:01.850675800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.516624595s" Jan 23 17:57:01.850733 containerd[1553]: time="2026-01-23T17:57:01.850723882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:57:01.858649 containerd[1553]: time="2026-01-23T17:57:01.856930711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:01.858649 containerd[1553]: time="2026-01-23T17:57:01.857763256Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:01.858649 containerd[1553]: time="2026-01-23T17:57:01.858439917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:57:01.875467 containerd[1553]: time="2026-01-23T17:57:01.875372311Z" level=info msg="CreateContainer within sandbox \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:57:01.916168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106523822.mount: Deactivated successfully. Jan 23 17:57:01.918151 containerd[1553]: time="2026-01-23T17:57:01.916656806Z" level=info msg="Container 3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:01.931744 containerd[1553]: time="2026-01-23T17:57:01.931642022Z" level=info msg="CreateContainer within sandbox \"502b8430be604f21d21da23717accfdec52e57fa17f157fd78a6547bb503493c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9\"" Jan 23 17:57:01.933445 containerd[1553]: time="2026-01-23T17:57:01.932922581Z" level=info msg="StartContainer for \"3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9\"" Jan 23 17:57:01.936774 containerd[1553]: time="2026-01-23T17:57:01.936739017Z" level=info msg="connecting to shim 3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9" address="unix:///run/containerd/s/f8bc184ecc6befaeece0b45f964816c0cb5de19143d8707b97eb036788f60006" protocol=ttrpc version=3 Jan 23 17:57:02.001770 systemd[1]: Started cri-containerd-3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9.scope - libcontainer container 3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9. Jan 23 17:57:02.090627 containerd[1553]: time="2026-01-23T17:57:02.090472184Z" level=info msg="StartContainer for \"3ea0cec377ad802771a0785be3bf3e807539d2bad3956f37952284dcb0487dd9\" returns successfully" Jan 23 17:57:02.284919 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 23 17:57:02.285034 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 17:57:02.485417 kubelet[2768]: I0123 17:57:02.485306 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dvz9m" podStartSLOduration=1.940051051 podStartE2EDuration="15.485286401s" podCreationTimestamp="2026-01-23 17:56:47 +0000 UTC" firstStartedPulling="2026-01-23 17:56:48.307200304 +0000 UTC m=+26.295731217" lastFinishedPulling="2026-01-23 17:57:01.852435654 +0000 UTC m=+39.840966567" observedRunningTime="2026-01-23 17:57:02.367407676 +0000 UTC m=+40.355938669" watchObservedRunningTime="2026-01-23 17:57:02.485286401 +0000 UTC m=+40.473817354" Jan 23 17:57:02.557467 kubelet[2768]: I0123 17:57:02.557411 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-backend-key-pair\") pod \"f1b8de16-9b1c-4d91-a786-2640c9f09491\" (UID: \"f1b8de16-9b1c-4d91-a786-2640c9f09491\") " Jan 23 17:57:02.557467 kubelet[2768]: I0123 17:57:02.557468 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-ca-bundle\") pod \"f1b8de16-9b1c-4d91-a786-2640c9f09491\" (UID: \"f1b8de16-9b1c-4d91-a786-2640c9f09491\") " Jan 23 17:57:02.557642 kubelet[2768]: I0123 17:57:02.557491 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfk4m\" (UniqueName: \"kubernetes.io/projected/f1b8de16-9b1c-4d91-a786-2640c9f09491-kube-api-access-pfk4m\") pod \"f1b8de16-9b1c-4d91-a786-2640c9f09491\" (UID: \"f1b8de16-9b1c-4d91-a786-2640c9f09491\") " Jan 23 17:57:02.567609 kubelet[2768]: I0123 17:57:02.567507 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b8de16-9b1c-4d91-a786-2640c9f09491-kube-api-access-pfk4m" (OuterVolumeSpecName: "kube-api-access-pfk4m") pod "f1b8de16-9b1c-4d91-a786-2640c9f09491" (UID: "f1b8de16-9b1c-4d91-a786-2640c9f09491"). InnerVolumeSpecName "kube-api-access-pfk4m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:57:02.567901 kubelet[2768]: I0123 17:57:02.567876 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f1b8de16-9b1c-4d91-a786-2640c9f09491" (UID: "f1b8de16-9b1c-4d91-a786-2640c9f09491"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:57:02.571036 kubelet[2768]: I0123 17:57:02.570972 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1b8de16-9b1c-4d91-a786-2640c9f09491" (UID: "f1b8de16-9b1c-4d91-a786-2640c9f09491"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:57:02.658140 kubelet[2768]: I0123 17:57:02.658066 2768 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-backend-key-pair\") on node \"ci-4459-2-3-3-b08bb0c7a1\" DevicePath \"\"" Jan 23 17:57:02.658140 kubelet[2768]: I0123 17:57:02.658104 2768 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b8de16-9b1c-4d91-a786-2640c9f09491-whisker-ca-bundle\") on node \"ci-4459-2-3-3-b08bb0c7a1\" DevicePath \"\"" Jan 23 17:57:02.658140 kubelet[2768]: I0123 17:57:02.658114 2768 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pfk4m\" (UniqueName: \"kubernetes.io/projected/f1b8de16-9b1c-4d91-a786-2640c9f09491-kube-api-access-pfk4m\") on node \"ci-4459-2-3-3-b08bb0c7a1\" DevicePath \"\"" Jan 23 17:57:02.808995 systemd[1]: var-lib-kubelet-pods-f1b8de16\x2d9b1c\x2d4d91\x2da786\x2d2640c9f09491-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfk4m.mount: Deactivated successfully. Jan 23 17:57:02.809185 systemd[1]: var-lib-kubelet-pods-f1b8de16\x2d9b1c\x2d4d91\x2da786\x2d2640c9f09491-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 17:57:03.355863 systemd[1]: Removed slice kubepods-besteffort-podf1b8de16_9b1c_4d91_a786_2640c9f09491.slice - libcontainer container kubepods-besteffort-podf1b8de16_9b1c_4d91_a786_2640c9f09491.slice. Jan 23 17:57:03.441633 systemd[1]: Created slice kubepods-besteffort-poda9a76376_ca93_4b12_a386_13b21a2c5528.slice - libcontainer container kubepods-besteffort-poda9a76376_ca93_4b12_a386_13b21a2c5528.slice. Jan 23 17:57:03.463825 kubelet[2768]: I0123 17:57:03.463780 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a9a76376-ca93-4b12-a386-13b21a2c5528-whisker-backend-key-pair\") pod \"whisker-5fc854889f-f6vdq\" (UID: \"a9a76376-ca93-4b12-a386-13b21a2c5528\") " pod="calico-system/whisker-5fc854889f-f6vdq" Jan 23 17:57:03.464028 kubelet[2768]: I0123 17:57:03.464014 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9a76376-ca93-4b12-a386-13b21a2c5528-whisker-ca-bundle\") pod \"whisker-5fc854889f-f6vdq\" (UID: \"a9a76376-ca93-4b12-a386-13b21a2c5528\") " pod="calico-system/whisker-5fc854889f-f6vdq" Jan 23 17:57:03.464119 kubelet[2768]: I0123 17:57:03.464101 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwbj\" (UniqueName: \"kubernetes.io/projected/a9a76376-ca93-4b12-a386-13b21a2c5528-kube-api-access-fbwbj\") pod \"whisker-5fc854889f-f6vdq\" (UID: \"a9a76376-ca93-4b12-a386-13b21a2c5528\") " pod="calico-system/whisker-5fc854889f-f6vdq" Jan 23 17:57:03.747587 containerd[1553]: time="2026-01-23T17:57:03.747476787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc854889f-f6vdq,Uid:a9a76376-ca93-4b12-a386-13b21a2c5528,Namespace:calico-system,Attempt:0,}" Jan 23 17:57:03.972126 systemd-networkd[1423]: calic5b7c3dd322: Link UP Jan 23 17:57:03.973063 systemd-networkd[1423]: calic5b7c3dd322: Gained carrier Jan 23 17:57:04.001475 containerd[1553]: 2026-01-23 17:57:03.776 [INFO][3899] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 
17:57:04.001475 containerd[1553]: 2026-01-23 17:57:03.827 [INFO][3899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0 whisker-5fc854889f- calico-system a9a76376-ca93-4b12-a386-13b21a2c5528 910 0 2026-01-23 17:57:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fc854889f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 whisker-5fc854889f-f6vdq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic5b7c3dd322 [] [] }} ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-" Jan 23 17:57:04.001475 containerd[1553]: 2026-01-23 17:57:03.827 [INFO][3899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.001475 containerd[1553]: 2026-01-23 17:57:03.895 [INFO][3949] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" HandleID="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.895 [INFO][3949] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" HandleID="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"whisker-5fc854889f-f6vdq", "timestamp":"2026-01-23 17:57:03.895610826 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.895 [INFO][3949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.895 [INFO][3949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.896 [INFO][3949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.908 [INFO][3949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.918 [INFO][3949] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.926 [INFO][3949] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.930 [INFO][3949] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001705 containerd[1553]: 2026-01-23 17:57:03.934 [INFO][3949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.934 [INFO][3949] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.937 [INFO][3949] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.944 [INFO][3949] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.952 [INFO][3949] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.129/26] block=192.168.21.128/26 handle="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.952 [INFO][3949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.129/26] handle="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.953 [INFO][3949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
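The claim sequence above pulls 192.168.21.129 out of the node's affine block 192.168.21.128/26. A quick sketch with the standard library's net/netip, just to make the block arithmetic concrete (nothing here is Calico-specific):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affine block from the log: a /26 spans 64 addresses,
	// 192.168.21.128 through 192.168.21.191.
	block := netip.MustParsePrefix("192.168.21.128/26")

	claimed := netip.MustParseAddr("192.168.21.129")
	fmt.Println(block.Contains(claimed)) // true: .129 lies inside the block

	// The pod address is then recorded as a /32, matching the
	// IPNetworks:["192.168.21.129/32"] field in the endpoint below.
	fmt.Println(netip.PrefixFrom(claimed, 32)) // 192.168.21.129/32
}
```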
Jan 23 17:57:04.001917 containerd[1553]: 2026-01-23 17:57:03.953 [INFO][3949] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.129/26] IPv6=[] ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" HandleID="k8s-pod-network.af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.002045 containerd[1553]: 2026-01-23 17:57:03.956 [INFO][3899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0", GenerateName:"whisker-5fc854889f-", Namespace:"calico-system", SelfLink:"", UID:"a9a76376-ca93-4b12-a386-13b21a2c5528", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fc854889f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"whisker-5fc854889f-f6vdq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5b7c3dd322", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:04.002045 containerd[1553]: 2026-01-23 17:57:03.957 [INFO][3899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.129/32] ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.002116 containerd[1553]: 2026-01-23 17:57:03.957 [INFO][3899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5b7c3dd322 ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.002116 containerd[1553]: 2026-01-23 17:57:03.973 [INFO][3899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.002155 containerd[1553]: 2026-01-23 17:57:03.975 [INFO][3899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" 
Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0", GenerateName:"whisker-5fc854889f-", Namespace:"calico-system", SelfLink:"", UID:"a9a76376-ca93-4b12-a386-13b21a2c5528", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 57, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fc854889f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb", Pod:"whisker-5fc854889f-f6vdq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic5b7c3dd322", MAC:"f2:4e:0a:5f:98:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:04.002201 containerd[1553]: 2026-01-23 17:57:03.987 [INFO][3899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" Namespace="calico-system" Pod="whisker-5fc854889f-f6vdq" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-whisker--5fc854889f--f6vdq-eth0" Jan 23 17:57:04.062561 containerd[1553]: time="2026-01-23T17:57:04.060986282Z" level=info msg="connecting to shim af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb" address="unix:///run/containerd/s/631c30647c23bfca4faf5702127eb5049c99b4039337d5bb9dde293222e0e9c8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:04.106910 systemd[1]: Started cri-containerd-af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb.scope - libcontainer container af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb. 
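Two things in this stretch are pure systemd mechanics: the transient .scope unit just started for the container, and, back at the volume teardown, mount units like var-lib-kubelet-pods-f1b8de16\x2d9b1c…, where every '-' in the pod UID is hex-escaped to \x2d and each '/' in the path becomes '-'. A simplified sketch of that path escaping (see systemd.unit(5); the real tool is systemd-escape, this is only an approximation):

```go
package main

import "fmt"

// escapePath approximates systemd's path escaping: drop the leading '/',
// map the remaining '/' to '-', keep [A-Za-z0-9_] and non-leading '.',
// and hex-escape everything else (including '-') as \xXX.
func escapePath(p string) string {
	out := ""
	for i, c := range []byte(p[1:]) {
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_',
			c == '.' && i != 0:
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// Reproduces the style of the mount unit names in the log
	// (pod path shortened for readability).
	fmt.Println(escapePath("/var/lib/kubelet/pods/f1b8de16-9b1c") + ".mount")
	// var-lib-kubelet-pods-f1b8de16\x2d9b1c.mount
}
```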
Jan 23 17:57:04.174469 kubelet[2768]: I0123 17:57:04.174387 2768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b8de16-9b1c-4d91-a786-2640c9f09491" path="/var/lib/kubelet/pods/f1b8de16-9b1c-4d91-a786-2640c9f09491/volumes" Jan 23 17:57:04.239637 containerd[1553]: time="2026-01-23T17:57:04.239143446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fc854889f-f6vdq,Uid:a9a76376-ca93-4b12-a386-13b21a2c5528,Namespace:calico-system,Attempt:0,} returns sandbox id \"af7703778b936846f7f3b2c98fc1faa640da031687b38548a29407f55a3e02bb\"" Jan 23 17:57:04.245872 containerd[1553]: time="2026-01-23T17:57:04.245423736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:57:04.597757 containerd[1553]: time="2026-01-23T17:57:04.597474710Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:04.599261 containerd[1553]: time="2026-01-23T17:57:04.599166155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:57:04.599540 containerd[1553]: time="2026-01-23T17:57:04.599386721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:57:04.599771 kubelet[2768]: E0123 17:57:04.599700 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:04.599868 kubelet[2768]: E0123 17:57:04.599786 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:04.610075 kubelet[2768]: E0123 17:57:04.610006 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d134215e77c34cbf9ec3fa5d672d6199,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:04.612762 containerd[1553]: time="2026-01-23T17:57:04.612677800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:57:04.962776 containerd[1553]: time="2026-01-23T17:57:04.962715319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:04.964938 containerd[1553]: time="2026-01-23T17:57:04.964817496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:57:04.964938 containerd[1553]: time="2026-01-23T17:57:04.964900858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:04.965147 kubelet[2768]: E0123 17:57:04.965071 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:04.965147 kubelet[2768]: E0123 17:57:04.965129 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:04.965576 kubelet[2768]: E0123 17:57:04.965464 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:04.966877 kubelet[2768]: E0123 17:57:04.966811 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:57:05.356183 kubelet[2768]: E0123 17:57:05.355627 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:57:05.408795 systemd-networkd[1423]: calic5b7c3dd322: Gained IPv6LL Jan 23 17:57:08.167768 containerd[1553]: time="2026-01-23T17:57:08.167695274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-4l6tt,Uid:d3d96596-5d19-4484-a8c6-296b60023534,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:57:08.316354 systemd-networkd[1423]: calia45ebb616ff: Link UP Jan 23 17:57:08.316696 systemd-networkd[1423]: calia45ebb616ff: Gained carrier Jan 23 17:57:08.335307 containerd[1553]: 2026-01-23 17:57:08.197 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:57:08.335307 containerd[1553]: 2026-01-23 17:57:08.216 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0 calico-apiserver-6d67bdb5bc- calico-apiserver d3d96596-5d19-4484-a8c6-296b60023534 844 0 2026-01-23 17:56:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d67bdb5bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 calico-apiserver-6d67bdb5bc-4l6tt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia45ebb616ff [] [] }} ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-" Jan 23 17:57:08.335307 containerd[1553]: 2026-01-23 17:57:08.216 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.335307 containerd[1553]: 2026-01-23 17:57:08.249 [INFO][4144] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" HandleID="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.249 [INFO][4144] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" HandleID="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" 
Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c8fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"calico-apiserver-6d67bdb5bc-4l6tt", "timestamp":"2026-01-23 17:57:08.249184378 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.249 [INFO][4144] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.249 [INFO][4144] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.249 [INFO][4144] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.268 [INFO][4144] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.275 [INFO][4144] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.282 [INFO][4144] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.285 [INFO][4144] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335613 containerd[1553]: 2026-01-23 17:57:08.288 [INFO][4144] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.288 [INFO][4144] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.290 [INFO][4144] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218 Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.297 [INFO][4144] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.306 [INFO][4144] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.130/26] block=192.168.21.128/26 handle="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.306 [INFO][4144] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.130/26] handle="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.306 [INFO][4144] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
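The same handle string, k8s-pod-network.&lt;containerID&gt;, appears in every ipam line for this pod. The handle is what makes the claim traceable and replay-safe: a repeated CNI ADD for the same container resolves to the allocation already recorded instead of consuming another address. A toy sketch of that idea, with a plain map standing in for Calico's datastore (hypothetical claim helper, container ID shortened):

```go
package main

import "fmt"

// allocations is keyed by handle ("k8s-pod-network.<containerID>" in the
// log). A map stands in for Calico's datastore here; this is a toy sketch
// of the idempotency idea, not Calico's implementation.
var allocations = map[string]string{}

func claim(handle, candidate string) string {
	if ip, ok := allocations[handle]; ok {
		return ip // replayed ADD: hand back the recorded address
	}
	allocations[handle] = candidate
	return candidate
}

func main() {
	h := "k8s-pod-network.73f106e0" // shortened for readability
	fmt.Println(claim(h, "192.168.21.130")) // first claim records .130
	fmt.Println(claim(h, "192.168.21.131")) // replay still returns .130
}
```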
Jan 23 17:57:08.335816 containerd[1553]: 2026-01-23 17:57:08.306 [INFO][4144] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.130/26] IPv6=[] ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" HandleID="k8s-pod-network.73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.335961 containerd[1553]: 2026-01-23 17:57:08.310 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0", GenerateName:"calico-apiserver-6d67bdb5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3d96596-5d19-4484-a8c6-296b60023534", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67bdb5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"calico-apiserver-6d67bdb5bc-4l6tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia45ebb616ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:08.336013 containerd[1553]: 2026-01-23 17:57:08.310 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.130/32] ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.336013 containerd[1553]: 2026-01-23 17:57:08.310 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia45ebb616ff ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.336013 containerd[1553]: 2026-01-23 17:57:08.317 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.336078 containerd[1553]: 2026-01-23 
17:57:08.318 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0", GenerateName:"calico-apiserver-6d67bdb5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3d96596-5d19-4484-a8c6-296b60023534", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67bdb5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218", Pod:"calico-apiserver-6d67bdb5bc-4l6tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia45ebb616ff", MAC:"2a:7f:46:f9:f5:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:08.336128 containerd[1553]: 2026-01-23 17:57:08.329 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-4l6tt" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--4l6tt-eth0" Jan 23 17:57:08.372299 containerd[1553]: time="2026-01-23T17:57:08.372239512Z" level=info msg="connecting to shim 73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218" address="unix:///run/containerd/s/7c3c129359938da915fd0a69bf183c19d00868517137321e93b615728b5840ea" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:08.405803 systemd[1]: Started cri-containerd-73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218.scope - libcontainer container 73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218. 
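The sandbox is up, but as with whisker and whisker-backend above, the next step fails: every ghcr.io/flatcar/calico/* tag resolves to a 404 at ghcr.io, so containerd surfaces gRPC NotFound and the kubelet cycles the container through ErrImagePull. A sketch of reproducing that resolve step with containerd's Go client, assuming access to the node's containerd socket and the kubelet's "k8s.io" namespace:

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The kubelet's images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4")
	if errdefs.IsNotFound(err) {
		// Same outcome as the log: the tag is missing upstream, so
		// retrying can only succeed once the image is published.
		fmt.Println("tag not found upstream:", err)
	}
}
```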
Jan 23 17:57:08.451412 containerd[1553]: time="2026-01-23T17:57:08.451321960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-4l6tt,Uid:d3d96596-5d19-4484-a8c6-296b60023534,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"73f106e0bec84161ba39cbed19d489ba9aed42f969915f10c5835f1ed3f41218\"" Jan 23 17:57:08.454727 containerd[1553]: time="2026-01-23T17:57:08.454602916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:08.805976 containerd[1553]: time="2026-01-23T17:57:08.805804347Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:08.807378 containerd[1553]: time="2026-01-23T17:57:08.807249220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:08.807378 containerd[1553]: time="2026-01-23T17:57:08.807341902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:08.807902 kubelet[2768]: E0123 17:57:08.807855 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:08.808187 kubelet[2768]: E0123 17:57:08.807911 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:08.808187 kubelet[2768]: E0123 17:57:08.808058 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9csw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-4l6tt_calico-apiserver(d3d96596-5d19-4484-a8c6-296b60023534): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:08.809522 kubelet[2768]: E0123 17:57:08.809437 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:57:09.167145 containerd[1553]: time="2026-01-23T17:57:09.166909808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n4dtl,Uid:880a2ec4-932d-40e4-a1c5-e4529584127c,Namespace:calico-system,Attempt:0,}" Jan 23 17:57:09.307664 systemd-networkd[1423]: caliaff633ac1b8: Link UP Jan 23 17:57:09.308264 systemd-networkd[1423]: caliaff633ac1b8: Gained carrier Jan 23 17:57:09.333186 containerd[1553]: 2026-01-23 17:57:09.195 [INFO][4227] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:57:09.333186 containerd[1553]: 2026-01-23 17:57:09.212 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0 goldmane-666569f655- calico-system 880a2ec4-932d-40e4-a1c5-e4529584127c 845 0 2026-01-23 17:56:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 goldmane-666569f655-n4dtl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaff633ac1b8 [] [] }} ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-" Jan 23 17:57:09.333186 containerd[1553]: 2026-01-23 17:57:09.212 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" 
WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.333186 containerd[1553]: 2026-01-23 17:57:09.249 [INFO][4235] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" HandleID="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.250 [INFO][4235] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" HandleID="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"goldmane-666569f655-n4dtl", "timestamp":"2026-01-23 17:57:09.249907668 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.250 [INFO][4235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.250 [INFO][4235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.250 [INFO][4235] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.262 [INFO][4235] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.268 [INFO][4235] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.274 [INFO][4235] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.277 [INFO][4235] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334022 containerd[1553]: 2026-01-23 17:57:09.281 [INFO][4235] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334530 containerd[1553]: 2026-01-23 17:57:09.281 [INFO][4235] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334530 containerd[1553]: 2026-01-23 17:57:09.284 [INFO][4235] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818 Jan 23 17:57:09.334530 containerd[1553]: 2026-01-23 17:57:09.289 [INFO][4235] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334530 
containerd[1553]: 2026-01-23 17:57:09.300 [INFO][4235] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.131/26] block=192.168.21.128/26 handle="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334530 containerd[1553]: 2026-01-23 17:57:09.300 [INFO][4235] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.131/26] handle="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:09.334530 containerd[1553]: 2026-01-23 17:57:09.300 [INFO][4235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:57:09.334530 containerd[1553]: 2026-01-23 17:57:09.300 [INFO][4235] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.131/26] IPv6=[] ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" HandleID="k8s-pod-network.352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.334722 containerd[1553]: 2026-01-23 17:57:09.302 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"880a2ec4-932d-40e4-a1c5-e4529584127c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"goldmane-666569f655-n4dtl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaff633ac1b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:09.334794 containerd[1553]: 2026-01-23 17:57:09.302 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.131/32] ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.334794 containerd[1553]: 2026-01-23 17:57:09.302 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaff633ac1b8 ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" 
WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.334794 containerd[1553]: 2026-01-23 17:57:09.307 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.334916 containerd[1553]: 2026-01-23 17:57:09.309 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"880a2ec4-932d-40e4-a1c5-e4529584127c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818", Pod:"goldmane-666569f655-n4dtl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaff633ac1b8", MAC:"1a:39:17:2c:e6:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:09.334991 containerd[1553]: 2026-01-23 17:57:09.329 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" Namespace="calico-system" Pod="goldmane-666569f655-n4dtl" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-goldmane--666569f655--n4dtl-eth0" Jan 23 17:57:09.373135 containerd[1553]: time="2026-01-23T17:57:09.373056968Z" level=info msg="connecting to shim 352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818" address="unix:///run/containerd/s/843cd0d5356ff0abc2535b467acfb450af734bfdf38e0e5a44440314cec1cf0f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:09.376011 kubelet[2768]: E0123 17:57:09.375953 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:57:09.413954 systemd[1]: Started cri-containerd-352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818.scope - libcontainer container 352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818. Jan 23 17:57:09.472684 containerd[1553]: time="2026-01-23T17:57:09.472465667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-n4dtl,Uid:880a2ec4-932d-40e4-a1c5-e4529584127c,Namespace:calico-system,Attempt:0,} returns sandbox id \"352d35395fe2babab529e6a8bc6149baf13b360e35e4ea8085c9e0ac53e9b818\"" Jan 23 17:57:09.475107 containerd[1553]: time="2026-01-23T17:57:09.475052084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:57:09.571155 systemd-networkd[1423]: calia45ebb616ff: Gained IPv6LL Jan 23 17:57:09.802806 containerd[1553]: time="2026-01-23T17:57:09.802478942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:09.805636 containerd[1553]: time="2026-01-23T17:57:09.805481648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:57:09.805807 containerd[1553]: time="2026-01-23T17:57:09.805673812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:09.806018 kubelet[2768]: E0123 17:57:09.805962 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:57:09.806234 kubelet[2768]: E0123 17:57:09.806208 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:57:09.806666 kubelet[2768]: E0123 17:57:09.806602 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n4dtl_calico-system(880a2ec4-932d-40e4-a1c5-e4529584127c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:09.808013 kubelet[2768]: E0123 17:57:09.807966 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:57:10.171423 containerd[1553]: 
time="2026-01-23T17:57:10.169653917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j22p7,Uid:8eaf0d49-a15b-4271-a48f-ef6ce1231911,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:10.171423 containerd[1553]: time="2026-01-23T17:57:10.170204169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-9sgp8,Uid:ff25427e-89d3-494f-819f-e42ac2ef9668,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:57:10.171423 containerd[1553]: time="2026-01-23T17:57:10.170461054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf56b4c9c-b4lk9,Uid:0e19fa73-ca02-4aed-bbc2-3496e7625b06,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:57:10.172130 containerd[1553]: time="2026-01-23T17:57:10.172079568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nmxbw,Uid:c5662b76-1476-436c-8fcf-b63b80628b31,Namespace:kube-system,Attempt:0,}" Jan 23 17:57:10.336730 systemd-networkd[1423]: caliaff633ac1b8: Gained IPv6LL Jan 23 17:57:10.375092 kubelet[2768]: E0123 17:57:10.374573 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:57:10.381258 kubelet[2768]: E0123 17:57:10.381214 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:57:10.512062 systemd-networkd[1423]: cali3e47a3a4883: Link UP Jan 23 17:57:10.514781 systemd-networkd[1423]: cali3e47a3a4883: Gained carrier Jan 23 17:57:10.540974 containerd[1553]: 2026-01-23 17:57:10.250 [INFO][4317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:57:10.540974 containerd[1553]: 2026-01-23 17:57:10.289 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0 coredns-674b8bbfcf- kube-system 8eaf0d49-a15b-4271-a48f-ef6ce1231911 842 0 2026-01-23 17:56:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 coredns-674b8bbfcf-j22p7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e47a3a4883 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-" Jan 23 17:57:10.540974 
containerd[1553]: 2026-01-23 17:57:10.290 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.540974 containerd[1553]: 2026-01-23 17:57:10.373 [INFO][4364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" HandleID="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.375 [INFO][4364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" HandleID="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b0e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"coredns-674b8bbfcf-j22p7", "timestamp":"2026-01-23 17:57:10.373324716 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.382 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.382 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
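
[Editor's note] The kubelet entries earlier in this window show the full failure chain for the goldmane container: containerd's resolver gets a 404 from ghcr.io for ghcr.io/flatcar/calico/goldmane:v3.30.4, maps it to "failed to resolve reference ...: not found", kubelet records ErrImagePull, and subsequent pod syncs surface as ImagePullBackOff while the kubelet retries with an exponential backoff (capped at five minutes). The resolve step can be reproduced against the same containerd socket; a minimal sketch, assuming the classic containerd Go client (github.com/containerd/containerd) and the k8s.io namespace the kubelet uses:

```go
// pullcheck.go - attempt the same pull the kubelet failed on.
// Assumes access to /run/containerd/containerd.sock (run as root on the node).
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// The kubelet pulls into the "k8s.io" namespace, as seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// For this tag the registry answers 404; containerd reports the same
		// "failed to resolve reference ...: not found" text that the kubelet
		// wraps into ErrImagePull above.
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled:", img.Name())
}
```
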
Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.383 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.437 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.467 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.475 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.479 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.541457 containerd[1553]: 2026-01-23 17:57:10.481 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.482 [INFO][4364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.485 [INFO][4364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224 Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.490 [INFO][4364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.503 [INFO][4364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.132/26] block=192.168.21.128/26 handle="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.504 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.132/26] handle="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.504 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
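
[Editor's note] The IPAM trace above is Calico's fast path: the node already holds an affinity for block 192.168.21.128/26, so the plugin confirms the affinity, loads that block, and hands out the next free address (.132) without going back to the pool. A /26 block spans 64 addresses (192.168.21.128 through 192.168.21.191); a small standard-library check of that arithmetic, purely illustrative:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	ip, block, _ := net.ParseCIDR("192.168.21.128/26")
	ones, bits := block.Mask.Size() // 26, 32
	fmt.Printf("block %s starts at %s and holds %d addresses\n",
		block, ip, 1<<(bits-ones)) // 1<<6 = 64

	// The address claimed in the log falls inside the node's affine block.
	fmt.Println(block.Contains(net.ParseIP("192.168.21.132"))) // true
}
```
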
Jan 23 17:57:10.542010 containerd[1553]: 2026-01-23 17:57:10.504 [INFO][4364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.132/26] IPv6=[] ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" HandleID="k8s-pod-network.3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.542154 containerd[1553]: 2026-01-23 17:57:10.507 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8eaf0d49-a15b-4271-a48f-ef6ce1231911", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"coredns-674b8bbfcf-j22p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e47a3a4883", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.542154 containerd[1553]: 2026-01-23 17:57:10.508 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.132/32] ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.542154 containerd[1553]: 2026-01-23 17:57:10.508 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e47a3a4883 ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.542154 containerd[1553]: 2026-01-23 17:57:10.515 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.542154 containerd[1553]: 2026-01-23 17:57:10.517 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8eaf0d49-a15b-4271-a48f-ef6ce1231911", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224", Pod:"coredns-674b8bbfcf-j22p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e47a3a4883", MAC:"da:07:e2:92:cd:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.542154 containerd[1553]: 2026-01-23 17:57:10.536 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" Namespace="kube-system" Pod="coredns-674b8bbfcf-j22p7" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--j22p7-eth0" Jan 23 17:57:10.591245 containerd[1553]: time="2026-01-23T17:57:10.590669162Z" level=info msg="connecting to shim 3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224" address="unix:///run/containerd/s/1b260353d535e3d451c6dbe1b751069c101682caee00577d9dd0f36a44344a88" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:10.658413 systemd-networkd[1423]: cali7cf8830005d: Link UP Jan 23 17:57:10.659180 systemd-networkd[1423]: cali7cf8830005d: Gained carrier Jan 23 17:57:10.684930 systemd[1]: Started cri-containerd-3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224.scope - libcontainer container 3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224. 
Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.300 [INFO][4336] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.343 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0 coredns-674b8bbfcf- kube-system c5662b76-1476-436c-8fcf-b63b80628b31 839 0 2026-01-23 17:56:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 coredns-674b8bbfcf-nmxbw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cf8830005d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.343 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.422 [INFO][4375] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" HandleID="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.422 [INFO][4375] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" HandleID="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"coredns-674b8bbfcf-nmxbw", "timestamp":"2026-01-23 17:57:10.422224783 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.423 [INFO][4375] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.504 [INFO][4375] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.504 [INFO][4375] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.536 [INFO][4375] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.565 [INFO][4375] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.584 [INFO][4375] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.592 [INFO][4375] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.596 [INFO][4375] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.599 [INFO][4375] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.606 [INFO][4375] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68 Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.617 [INFO][4375] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.635 [INFO][4375] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.133/26] block=192.168.21.128/26 handle="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.636 [INFO][4375] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.133/26] handle="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.636 [INFO][4375] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
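
[Editor's note] Worth noticing in the timestamps: the four concurrent CNI ADD handlers ([4364], [4375], [4377], [4386]) queue behind the host-wide IPAM lock. [4364] releases it at 17:57:10.504, [4375] acquires at .504 and releases at .636, [4377] runs .636-.749, and [4386] follows. Serializing assignment this way keeps the node's block consistent without per-address locking. A minimal sketch of the pattern (not Calico's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// blockAllocator hands out addresses from a node's affine block, one
// caller at a time - the same discipline as the host-wide IPAM lock.
type blockAllocator struct {
	mu   sync.Mutex
	next int
	base string
}

func (b *blockAllocator) assign() string {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	addr := fmt.Sprintf("%s.%d/26", b.base, b.next)
	b.next++
	return addr
}

func main() {
	alloc := &blockAllocator{next: 132, base: "192.168.21"}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // four concurrent CNI ADDs, as in the log
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(alloc.assign())
		}()
	}
	wg.Wait()
}
```
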
Jan 23 17:57:10.702877 containerd[1553]: 2026-01-23 17:57:10.636 [INFO][4375] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.133/26] IPv6=[] ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" HandleID="k8s-pod-network.fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.703422 containerd[1553]: 2026-01-23 17:57:10.642 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5662b76-1476-436c-8fcf-b63b80628b31", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"coredns-674b8bbfcf-nmxbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cf8830005d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.703422 containerd[1553]: 2026-01-23 17:57:10.643 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.133/32] ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.703422 containerd[1553]: 2026-01-23 17:57:10.643 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cf8830005d ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.703422 containerd[1553]: 2026-01-23 17:57:10.659 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.703422 containerd[1553]: 2026-01-23 17:57:10.664 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5662b76-1476-436c-8fcf-b63b80628b31", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68", Pod:"coredns-674b8bbfcf-nmxbw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cf8830005d", MAC:"36:61:b8:f7:09:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.703422 containerd[1553]: 2026-01-23 17:57:10.692 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" Namespace="kube-system" Pod="coredns-674b8bbfcf-nmxbw" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-coredns--674b8bbfcf--nmxbw-eth0" Jan 23 17:57:10.762611 containerd[1553]: time="2026-01-23T17:57:10.762254406Z" level=info msg="connecting to shim fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68" address="unix:///run/containerd/s/a3fa535ae6fad4f8e8d1bc3edb541b9294be2e851f1a0ffc0da6ca8dbc888b74" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:10.767876 systemd-networkd[1423]: cali435deb623d6: Link UP Jan 23 17:57:10.769563 systemd-networkd[1423]: cali435deb623d6: Gained carrier Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.302 [INFO][4327] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.360 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0 calico-apiserver-5cf56b4c9c- calico-apiserver 0e19fa73-ca02-4aed-bbc2-3496e7625b06 840 0 2026-01-23 17:56:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cf56b4c9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 calico-apiserver-5cf56b4c9c-b4lk9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali435deb623d6 [] [] }} ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.360 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.451 [INFO][4377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" HandleID="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.451 [INFO][4377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" HandleID="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"calico-apiserver-5cf56b4c9c-b4lk9", "timestamp":"2026-01-23 17:57:10.451741403 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.451 [INFO][4377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.636 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.637 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.667 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.687 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.709 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.715 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.721 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.721 [INFO][4377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.726 [INFO][4377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670 Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.737 [INFO][4377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.749 [INFO][4377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.134/26] block=192.168.21.128/26 handle="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.749 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.134/26] handle="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.749 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
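
[Editor's note] "Writing block in order to claim IPs" is the commit step: a claim only becomes durable once the updated allocation block is written back to the datastore, and a conflicting write (another writer got there first, or the read was stale) makes the plugin reload and retry rather than hand out a duplicate. Calico implements this as a compare-and-swap on the block resource; the sketch below only illustrates the retry shape against a hypothetical revisioned store, not the real API:

```go
package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("revision conflict")

// store is a stand-in for a revisioned datastore (etcd / kube-API style).
type store struct{ rev int }

func (s *store) update(rev int) error {
	if rev != s.rev {
		return errConflict // someone else wrote the block first
	}
	s.rev++
	return nil
}

// claimIP retries the read-modify-write until the block write succeeds.
func claimIP(s *store) error {
	for attempt := 0; attempt < 5; attempt++ {
		rev := s.rev // read the block (and its revision)
		if err := s.update(rev); err == nil {
			return nil // "Successfully claimed IPs"
		}
		fmt.Println("conflict, reloading block; attempt", attempt+1)
	}
	return errors.New("giving up after repeated conflicts")
}

func main() { fmt.Println(claimIP(&store{})) }
```
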
Jan 23 17:57:10.803810 containerd[1553]: 2026-01-23 17:57:10.749 [INFO][4377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.134/26] IPv6=[] ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" HandleID="k8s-pod-network.2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.804812 containerd[1553]: 2026-01-23 17:57:10.757 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0", GenerateName:"calico-apiserver-5cf56b4c9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e19fa73-ca02-4aed-bbc2-3496e7625b06", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf56b4c9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"calico-apiserver-5cf56b4c9c-b4lk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435deb623d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.804812 containerd[1553]: 2026-01-23 17:57:10.758 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.134/32] ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.804812 containerd[1553]: 2026-01-23 17:57:10.758 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali435deb623d6 ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.804812 containerd[1553]: 2026-01-23 17:57:10.771 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.804812 containerd[1553]: 2026-01-23 
17:57:10.777 [INFO][4327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0", GenerateName:"calico-apiserver-5cf56b4c9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e19fa73-ca02-4aed-bbc2-3496e7625b06", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cf56b4c9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670", Pod:"calico-apiserver-5cf56b4c9c-b4lk9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435deb623d6", MAC:"86:12:62:b8:af:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.804812 containerd[1553]: 2026-01-23 17:57:10.795 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" Namespace="calico-apiserver" Pod="calico-apiserver-5cf56b4c9c-b4lk9" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--5cf56b4c9c--b4lk9-eth0" Jan 23 17:57:10.895173 systemd-networkd[1423]: cali3f71d20f37c: Link UP Jan 23 17:57:10.896252 systemd-networkd[1423]: cali3f71d20f37c: Gained carrier Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.298 [INFO][4339] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.376 [INFO][4339] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0 calico-apiserver-6d67bdb5bc- calico-apiserver ff25427e-89d3-494f-819f-e42ac2ef9668 846 0 2026-01-23 17:56:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d67bdb5bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 calico-apiserver-6d67bdb5bc-9sgp8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f71d20f37c [] [] }} ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" 
Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.376 [INFO][4339] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.478 [INFO][4386] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" HandleID="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.478 [INFO][4386] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" HandleID="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ddb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"calico-apiserver-6d67bdb5bc-9sgp8", "timestamp":"2026-01-23 17:57:10.478213119 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.478 [INFO][4386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.750 [INFO][4386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.750 [INFO][4386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.785 [INFO][4386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.810 [INFO][4386] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.826 [INFO][4386] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.834 [INFO][4386] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.841 [INFO][4386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.841 [INFO][4386] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.846 [INFO][4386] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.863 [INFO][4386] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.878 [INFO][4386] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.135/26] block=192.168.21.128/26 handle="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.879 [INFO][4386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.135/26] handle="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.879 [INFO][4386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
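
[Editor's note] Every endpoint in this window gets a host-side interface named "cali" plus 11 hex characters (cali3e47a3a4883, cali7cf8830005d, cali435deb623d6, cali3f71d20f37c), set in the "Setting the host side veth name" entries. Calico derives the suffix deterministically from the workload's identity so the name stays stable across retries; the exact hash varies by Calico version, so the sketch below uses SHA-256 purely as a stand-in to show the shape:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// vethName sketches a "cali" + 11-hex-char interface name derived from
// the workload identity. SHA-256 is a stand-in, not Calico's actual hash.
func vethName(namespace, pod string) string {
	sum := sha256.Sum256([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("kube-system", "coredns-674b8bbfcf-j22p7"))
}
```
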
Jan 23 17:57:10.918587 containerd[1553]: 2026-01-23 17:57:10.880 [INFO][4386] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.135/26] IPv6=[] ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" HandleID="k8s-pod-network.32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.919909 containerd[1553]: 2026-01-23 17:57:10.890 [INFO][4339] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0", GenerateName:"calico-apiserver-6d67bdb5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff25427e-89d3-494f-819f-e42ac2ef9668", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67bdb5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"calico-apiserver-6d67bdb5bc-9sgp8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f71d20f37c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.919909 containerd[1553]: 2026-01-23 17:57:10.890 [INFO][4339] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.135/32] ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.919909 containerd[1553]: 2026-01-23 17:57:10.890 [INFO][4339] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f71d20f37c ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.919909 containerd[1553]: 2026-01-23 17:57:10.893 [INFO][4339] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.919909 containerd[1553]: 2026-01-23 
17:57:10.893 [INFO][4339] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0", GenerateName:"calico-apiserver-6d67bdb5bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff25427e-89d3-494f-819f-e42ac2ef9668", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d67bdb5bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf", Pod:"calico-apiserver-6d67bdb5bc-9sgp8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f71d20f37c", MAC:"76:fe:f6:fe:72:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:10.919909 containerd[1553]: 2026-01-23 17:57:10.907 [INFO][4339] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" Namespace="calico-apiserver" Pod="calico-apiserver-6d67bdb5bc-9sgp8" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--apiserver--6d67bdb5bc--9sgp8-eth0" Jan 23 17:57:10.919909 containerd[1553]: time="2026-01-23T17:57:10.918504048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j22p7,Uid:8eaf0d49-a15b-4271-a48f-ef6ce1231911,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224\"" Jan 23 17:57:10.933480 containerd[1553]: time="2026-01-23T17:57:10.933421562Z" level=info msg="connecting to shim 2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670" address="unix:///run/containerd/s/9a2b6b12ea8271a701cd9c2ae0bebff334238bad98de8938e728450ee4ed67d4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:10.940422 containerd[1553]: time="2026-01-23T17:57:10.939644613Z" level=info msg="CreateContainer within sandbox \"3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:57:10.967735 systemd[1]: Started cri-containerd-fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68.scope - libcontainer container fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68. 
Jan 23 17:57:10.983459 containerd[1553]: time="2026-01-23T17:57:10.983329250Z" level=info msg="Container 1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:11.008559 containerd[1553]: time="2026-01-23T17:57:11.008168965Z" level=info msg="CreateContainer within sandbox \"3d52975d1ec6c8fdb0ba3ff45968e3effda8d805abc5227bb9a35156913f3224\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219\"" Jan 23 17:57:11.010316 systemd[1]: Started cri-containerd-2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670.scope - libcontainer container 2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670. Jan 23 17:57:11.011134 containerd[1553]: time="2026-01-23T17:57:11.010927741Z" level=info msg="StartContainer for \"1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219\"" Jan 23 17:57:11.019130 containerd[1553]: time="2026-01-23T17:57:11.019015504Z" level=info msg="connecting to shim 1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219" address="unix:///run/containerd/s/1b260353d535e3d451c6dbe1b751069c101682caee00577d9dd0f36a44344a88" protocol=ttrpc version=3 Jan 23 17:57:11.026761 containerd[1553]: time="2026-01-23T17:57:11.026714378Z" level=info msg="connecting to shim 32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf" address="unix:///run/containerd/s/63930e31b585e25230a4e6c78546dd261cd3284839b0a9c0111e8563368dde05" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:11.063866 systemd[1]: Started cri-containerd-1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219.scope - libcontainer container 1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219. Jan 23 17:57:11.084819 systemd[1]: Started cri-containerd-32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf.scope - libcontainer container 32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf. 
Jan 23 17:57:11.156965 containerd[1553]: time="2026-01-23T17:57:11.156909198Z" level=info msg="StartContainer for \"1881cfdd6ef4b6d7383ec96232681db7158864ebf1884571503ab1b4ea78e219\" returns successfully" Jan 23 17:57:11.170033 containerd[1553]: time="2026-01-23T17:57:11.159141683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nmxbw,Uid:c5662b76-1476-436c-8fcf-b63b80628b31,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68\"" Jan 23 17:57:11.170033 containerd[1553]: time="2026-01-23T17:57:11.165295807Z" level=info msg="CreateContainer within sandbox \"fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:57:11.236311 containerd[1553]: time="2026-01-23T17:57:11.236260234Z" level=info msg="Container d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:11.245399 containerd[1553]: time="2026-01-23T17:57:11.245344937Z" level=info msg="CreateContainer within sandbox \"fb499e0fa41cf9c1097570e9ee613deb169bad8d93749a58d53f1f4ad8a4fd68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7\"" Jan 23 17:57:11.247644 containerd[1553]: time="2026-01-23T17:57:11.246174594Z" level=info msg="StartContainer for \"d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7\"" Jan 23 17:57:11.248470 containerd[1553]: time="2026-01-23T17:57:11.248429959Z" level=info msg="connecting to shim d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7" address="unix:///run/containerd/s/a3fa535ae6fad4f8e8d1bc3edb541b9294be2e851f1a0ffc0da6ca8dbc888b74" protocol=ttrpc version=3 Jan 23 17:57:11.281019 systemd[1]: Started cri-containerd-d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7.scope - libcontainer container d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7. 
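
[Editor's note] The containerd entries above trace the CRI call sequence for each pod: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer brings it up, with one shim ttrpc connection per sandbox ("connecting to shim ... protocol=ttrpc version=3"). The same sequence can be driven over the CRI gRPC API on the node's socket; a compressed sketch assuming k8s.io/cri-api, with the configs left mostly empty (a real client must fill in metadata, image, and mounts):

```go
// Sketch of the CRI RunPodSandbox -> CreateContainer -> StartContainer
// sequence visible in the log. Illustrative only; empty configs will be
// rejected by a real runtime.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{ /* metadata, DNS, ... */ },
	})
	if err != nil {
		log.Fatal(err)
	}

	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config:       &runtimeapi.ContainerConfig{ /* image, command */ },
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: c.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```
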
Jan 23 17:57:11.341873 containerd[1553]: time="2026-01-23T17:57:11.341752517Z" level=info msg="StartContainer for \"d5b26d051f50a9fab117aa000137db0a192c28989b5e60657859766ce4c02ca7\" returns successfully" Jan 23 17:57:11.389297 kubelet[2768]: E0123 17:57:11.389232 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:57:11.402213 containerd[1553]: time="2026-01-23T17:57:11.401004389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d67bdb5bc-9sgp8,Uid:ff25427e-89d3-494f-819f-e42ac2ef9668,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"32eaa32954298c71d1a04f0cd041f3986addfc709da2877d32cdbe26be2253cf\"" Jan 23 17:57:11.405783 containerd[1553]: time="2026-01-23T17:57:11.405741724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:11.423499 kubelet[2768]: I0123 17:57:11.423393 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:57:11.444675 kubelet[2768]: I0123 17:57:11.444007 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nmxbw" podStartSLOduration=43.443992294 podStartE2EDuration="43.443992294s" podCreationTimestamp="2026-01-23 17:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:11.417490761 +0000 UTC m=+49.406021714" watchObservedRunningTime="2026-01-23 17:57:11.443992294 +0000 UTC m=+49.432523247" Jan 23 17:57:11.469943 kubelet[2768]: I0123 17:57:11.468692 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j22p7" podStartSLOduration=43.46866903 podStartE2EDuration="43.46866903s" podCreationTimestamp="2026-01-23 17:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:57:11.464271662 +0000 UTC m=+49.452802615" watchObservedRunningTime="2026-01-23 17:57:11.46866903 +0000 UTC m=+49.457200103" Jan 23 17:57:11.498642 containerd[1553]: time="2026-01-23T17:57:11.497886738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cf56b4c9c-b4lk9,Uid:0e19fa73-ca02-4aed-bbc2-3496e7625b06,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2a448ae76c0bbc2f5f3ed2a622747baafb4eaa90d50b03fd445b7de8286c8670\"" Jan 23 17:57:11.554727 systemd-networkd[1423]: cali3e47a3a4883: Gained IPv6LL Jan 23 17:57:11.772528 containerd[1553]: time="2026-01-23T17:57:11.772448022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:11.773960 containerd[1553]: time="2026-01-23T17:57:11.773852371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
23 17:57:11.773960 containerd[1553]: time="2026-01-23T17:57:11.773915612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:11.774142 kubelet[2768]: E0123 17:57:11.774087 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:11.774142 kubelet[2768]: E0123 17:57:11.774134 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:11.775362 kubelet[2768]: E0123 17:57:11.774330 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnsqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-9sgp8_calico-apiserver(ff25427e-89d3-494f-819f-e42ac2ef9668): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:11.775574 
containerd[1553]: time="2026-01-23T17:57:11.774775189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:11.776551 kubelet[2768]: E0123 17:57:11.775757 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:57:12.064706 systemd-networkd[1423]: cali7cf8830005d: Gained IPv6LL Jan 23 17:57:12.126281 containerd[1553]: time="2026-01-23T17:57:12.126234432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:12.127920 containerd[1553]: time="2026-01-23T17:57:12.127876664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:12.127987 containerd[1553]: time="2026-01-23T17:57:12.127975706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:12.128395 kubelet[2768]: E0123 17:57:12.128321 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:12.128719 kubelet[2768]: E0123 17:57:12.128579 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:12.129234 kubelet[2768]: E0123 17:57:12.129174 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpw7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cf56b4c9c-b4lk9_calico-apiserver(0e19fa73-ca02-4aed-bbc2-3496e7625b06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:12.130584 kubelet[2768]: E0123 17:57:12.130416 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:12.169770 containerd[1553]: time="2026-01-23T17:57:12.169327822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2j7b,Uid:c91580b5-1014-472e-a6f0-53c9f68e2405,Namespace:calico-system,Attempt:0,}" Jan 23 17:57:12.169770 containerd[1553]: time="2026-01-23T17:57:12.169614028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d89c9c97d-bcllg,Uid:505697ac-a88f-4c60-b275-ddbfae3b76e6,Namespace:calico-system,Attempt:0,}" Jan 23 17:57:12.274741 systemd-networkd[1423]: vxlan.calico: Link UP Jan 23 17:57:12.274761 systemd-networkd[1423]: vxlan.calico: Gained carrier Jan 23 17:57:12.321478 systemd-networkd[1423]: cali3f71d20f37c: Gained IPv6LL Jan 23 17:57:12.385682 systemd-networkd[1423]: 
cali435deb623d6: Gained IPv6LL Jan 23 17:57:12.395941 kubelet[2768]: E0123 17:57:12.395783 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:57:12.399733 kubelet[2768]: E0123 17:57:12.399689 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:12.425630 systemd-networkd[1423]: calidabed94739f: Link UP Jan 23 17:57:12.425977 systemd-networkd[1423]: calidabed94739f: Gained carrier Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.247 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0 csi-node-driver- calico-system c91580b5-1014-472e-a6f0-53c9f68e2405 741 0 2026-01-23 17:56:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 csi-node-driver-r2j7b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidabed94739f [] [] }} ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.248 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.324 [INFO][4779] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" HandleID="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.325 [INFO][4779] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" HandleID="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" 
Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bbb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"csi-node-driver-r2j7b", "timestamp":"2026-01-23 17:57:12.32444253 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.326 [INFO][4779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.326 [INFO][4779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.326 [INFO][4779] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.343 [INFO][4779] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.362 [INFO][4779] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.371 [INFO][4779] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.381 [INFO][4779] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.385 [INFO][4779] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.385 [INFO][4779] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.389 [INFO][4779] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7 Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.400 [INFO][4779] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.410 [INFO][4779] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.136/26] block=192.168.21.128/26 handle="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.410 [INFO][4779] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.136/26] handle="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.410 [INFO][4779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:57:12.489232 containerd[1553]: 2026-01-23 17:57:12.410 [INFO][4779] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.136/26] IPv6=[] ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" HandleID="k8s-pod-network.6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.489799 containerd[1553]: 2026-01-23 17:57:12.419 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c91580b5-1014-472e-a6f0-53c9f68e2405", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"csi-node-driver-r2j7b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidabed94739f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:12.489799 containerd[1553]: 2026-01-23 17:57:12.419 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.136/32] ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.489799 containerd[1553]: 2026-01-23 17:57:12.419 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidabed94739f ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.489799 containerd[1553]: 2026-01-23 17:57:12.424 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.489799 containerd[1553]: 2026-01-23 17:57:12.428 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c91580b5-1014-472e-a6f0-53c9f68e2405", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7", Pod:"csi-node-driver-r2j7b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidabed94739f", MAC:"0e:db:23:01:64:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:12.489799 containerd[1553]: 2026-01-23 17:57:12.471 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" Namespace="calico-system" Pod="csi-node-driver-r2j7b" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-csi--node--driver--r2j7b-eth0" Jan 23 17:57:12.531175 containerd[1553]: time="2026-01-23T17:57:12.531118230Z" level=info msg="connecting to shim 6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7" address="unix:///run/containerd/s/7b0d0f1e9330b8b3873fb36f22e3ae2f78e25a709274c7fb13782d84f1b6bfdf" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:12.562810 systemd[1]: Started cri-containerd-6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7.scope - libcontainer container 6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7. 
Jan 23 17:57:12.609356 containerd[1553]: time="2026-01-23T17:57:12.609237655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r2j7b,Uid:c91580b5-1014-472e-a6f0-53c9f68e2405,Namespace:calico-system,Attempt:0,} returns sandbox id \"6aafda08ebb27814a7213a55635745dfa21858af814f047d817b1d85a1c8a0b7\"" Jan 23 17:57:12.619799 containerd[1553]: time="2026-01-23T17:57:12.619759858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:57:12.668831 systemd-networkd[1423]: calia2204963d3b: Link UP Jan 23 17:57:12.670789 systemd-networkd[1423]: calia2204963d3b: Gained carrier Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.256 [INFO][4764] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0 calico-kube-controllers-5d89c9c97d- calico-system 505697ac-a88f-4c60-b275-ddbfae3b76e6 843 0 2026-01-23 17:56:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d89c9c97d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-3-3-b08bb0c7a1 calico-kube-controllers-5d89c9c97d-bcllg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia2204963d3b [] [] }} ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.256 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.333 [INFO][4786] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" HandleID="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.333 [INFO][4786] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" HandleID="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-3-b08bb0c7a1", "pod":"calico-kube-controllers-5d89c9c97d-bcllg", "timestamp":"2026-01-23 17:57:12.333108897 +0000 UTC"}, Hostname:"ci-4459-2-3-3-b08bb0c7a1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.333 [INFO][4786] ipam/ipam_plugin.go 377: About to acquire host-wide 
IPAM lock. Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.415 [INFO][4786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.415 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-3-b08bb0c7a1' Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.460 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.485 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.521 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.557 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.578 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.578 [INFO][4786] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.596 [INFO][4786] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288 Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.622 [INFO][4786] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.655 [INFO][4786] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.137/26] block=192.168.21.128/26 handle="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.655 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.137/26] handle="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" host="ci-4459-2-3-3-b08bb0c7a1" Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.655 [INFO][4786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:57:12.703416 containerd[1553]: 2026-01-23 17:57:12.655 [INFO][4786] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.137/26] IPv6=[] ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" HandleID="k8s-pod-network.a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Workload="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.704136 containerd[1553]: 2026-01-23 17:57:12.660 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0", GenerateName:"calico-kube-controllers-5d89c9c97d-", Namespace:"calico-system", SelfLink:"", UID:"505697ac-a88f-4c60-b275-ddbfae3b76e6", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d89c9c97d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"", Pod:"calico-kube-controllers-5d89c9c97d-bcllg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2204963d3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:12.704136 containerd[1553]: 2026-01-23 17:57:12.660 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.137/32] ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.704136 containerd[1553]: 2026-01-23 17:57:12.661 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2204963d3b ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.704136 containerd[1553]: 2026-01-23 17:57:12.670 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" 
WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.704136 containerd[1553]: 2026-01-23 17:57:12.671 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0", GenerateName:"calico-kube-controllers-5d89c9c97d-", Namespace:"calico-system", SelfLink:"", UID:"505697ac-a88f-4c60-b275-ddbfae3b76e6", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d89c9c97d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-3-b08bb0c7a1", ContainerID:"a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288", Pod:"calico-kube-controllers-5d89c9c97d-bcllg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2204963d3b", MAC:"d6:cc:b8:6e:d7:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:57:12.704136 containerd[1553]: 2026-01-23 17:57:12.698 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" Namespace="calico-system" Pod="calico-kube-controllers-5d89c9c97d-bcllg" WorkloadEndpoint="ci--4459--2--3--3--b08bb0c7a1-k8s-calico--kube--controllers--5d89c9c97d--bcllg-eth0" Jan 23 17:57:12.741876 containerd[1553]: time="2026-01-23T17:57:12.741818928Z" level=info msg="connecting to shim a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288" address="unix:///run/containerd/s/cf81ea6e82657a237f6e55246bbdedbeecf8271fc8e432a00c9973a2eae66000" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:57:12.788794 systemd[1]: Started cri-containerd-a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288.scope - libcontainer container a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288. 
Jan 23 17:57:12.843952 containerd[1553]: time="2026-01-23T17:57:12.843904534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d89c9c97d-bcllg,Uid:505697ac-a88f-4c60-b275-ddbfae3b76e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a74b910b98a80ed0c94e23f776050b90371b561bc5b749b1ea119045837d8288\"" Jan 23 17:57:13.024984 containerd[1553]: time="2026-01-23T17:57:13.024808678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:13.026175 containerd[1553]: time="2026-01-23T17:57:13.026039061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:57:13.026175 containerd[1553]: time="2026-01-23T17:57:13.026155823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:57:13.026471 kubelet[2768]: E0123 17:57:13.026318 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:57:13.026679 kubelet[2768]: E0123 17:57:13.026656 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:57:13.027662 kubelet[2768]: E0123 17:57:13.027359 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:13.028238 containerd[1553]: time="2026-01-23T17:57:13.027791573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:57:13.344804 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Jan 23 17:57:13.364458 containerd[1553]: time="2026-01-23T17:57:13.364387936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:13.365845 containerd[1553]: time="2026-01-23T17:57:13.365790962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:13.366623 containerd[1553]: time="2026-01-23T17:57:13.366570056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:57:13.366957 kubelet[2768]: E0123 17:57:13.366909 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:57:13.367060 kubelet[2768]: E0123 
17:57:13.366977 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:57:13.368460 containerd[1553]: time="2026-01-23T17:57:13.368419090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:57:13.370504 kubelet[2768]: E0123 17:57:13.367484 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d89c9c97d-bcllg_calico-system(505697ac-a88f-4c60-b275-ddbfae3b76e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 
17:57:13.371795 kubelet[2768]: E0123 17:57:13.371746 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:57:13.402931 kubelet[2768]: E0123 17:57:13.402877 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:57:13.408968 kubelet[2768]: E0123 17:57:13.408867 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:57:13.410811 kubelet[2768]: E0123 17:57:13.410761 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:13.756773 containerd[1553]: time="2026-01-23T17:57:13.756360279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:13.758058 containerd[1553]: time="2026-01-23T17:57:13.757856506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:57:13.758058 containerd[1553]: time="2026-01-23T17:57:13.757998429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:57:13.758630 kubelet[2768]: E0123 17:57:13.758575 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:57:13.758720 kubelet[2768]: E0123 17:57:13.758634 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:57:13.762826 kubelet[2768]: E0123 17:57:13.762661 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:13.764541 kubelet[2768]: E0123 17:57:13.764448 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:57:14.241149 systemd-networkd[1423]: calidabed94739f: Gained IPv6LL Jan 23 17:57:14.242366 systemd-networkd[1423]: calia2204963d3b: Gained IPv6LL Jan 23 17:57:14.411328 kubelet[2768]: E0123 17:57:14.411244 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:57:14.413071 kubelet[2768]: E0123 17:57:14.412920 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:57:16.168759 containerd[1553]: time="2026-01-23T17:57:16.168343615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:57:16.529015 containerd[1553]: time="2026-01-23T17:57:16.528930974Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:16.530787 containerd[1553]: time="2026-01-23T17:57:16.530686962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:57:16.530916 containerd[1553]: time="2026-01-23T17:57:16.530737323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:57:16.531492 kubelet[2768]: E0123 17:57:16.531134 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:16.531492 kubelet[2768]: E0123 17:57:16.531214 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:16.531492 kubelet[2768]: E0123 17:57:16.531401 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d134215e77c34cbf9ec3fa5d672d6199,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:16.535004 containerd[1553]: time="2026-01-23T17:57:16.534969151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:57:16.881782 containerd[1553]: time="2026-01-23T17:57:16.881606725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:16.883459 containerd[1553]: time="2026-01-23T17:57:16.883336673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:57:16.884707 containerd[1553]: time="2026-01-23T17:57:16.883363833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:16.884813 kubelet[2768]: E0123 17:57:16.883908 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:16.884813 kubelet[2768]: E0123 17:57:16.883986 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:16.884813 kubelet[2768]: E0123 17:57:16.884161 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:16.886298 kubelet[2768]: E0123 17:57:16.886220 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:57:22.168153 containerd[1553]: time="2026-01-23T17:57:22.168102425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:22.517908 containerd[1553]: time="2026-01-23T17:57:22.517845185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:22.519794 containerd[1553]: time="2026-01-23T17:57:22.519746248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:22.520056 containerd[1553]: time="2026-01-23T17:57:22.519999971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:22.520220 kubelet[2768]: E0123 17:57:22.520174 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:22.520484 kubelet[2768]: E0123 17:57:22.520233 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:22.520484 kubelet[2768]: E0123 17:57:22.520364 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9csw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-4l6tt_calico-apiserver(d3d96596-5d19-4484-a8c6-296b60023534): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:22.521665 kubelet[2768]: E0123 17:57:22.521596 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:57:23.168854 containerd[1553]: time="2026-01-23T17:57:23.168801823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:57:23.532776 containerd[1553]: time="2026-01-23T17:57:23.532594451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:23.534277 containerd[1553]: time="2026-01-23T17:57:23.534197989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:57:23.534394 containerd[1553]: time="2026-01-23T17:57:23.534206069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:23.534712 kubelet[2768]: E0123 17:57:23.534671 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:57:23.535544 kubelet[2768]: E0123 17:57:23.535074 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:57:23.535781 kubelet[2768]: E0123 17:57:23.535725 2768 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n4dtl_calico-system(880a2ec4-932d-40e4-a1c5-e4529584127c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:23.539563 kubelet[2768]: E0123 17:57:23.538390 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" 
podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:57:24.170623 containerd[1553]: time="2026-01-23T17:57:24.169091770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:24.525209 containerd[1553]: time="2026-01-23T17:57:24.525103620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:24.526703 containerd[1553]: time="2026-01-23T17:57:24.526625597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:24.526823 containerd[1553]: time="2026-01-23T17:57:24.526750318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:24.527129 kubelet[2768]: E0123 17:57:24.527074 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:24.527274 kubelet[2768]: E0123 17:57:24.527137 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:24.527369 kubelet[2768]: E0123 17:57:24.527301 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpw7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cf56b4c9c-b4lk9_calico-apiserver(0e19fa73-ca02-4aed-bbc2-3496e7625b06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:24.529446 kubelet[2768]: E0123 17:57:24.528816 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:27.168750 containerd[1553]: time="2026-01-23T17:57:27.168657684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:57:27.511173 containerd[1553]: time="2026-01-23T17:57:27.511066980Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:27.512738 containerd[1553]: time="2026-01-23T17:57:27.512491993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:57:27.512738 containerd[1553]: time="2026-01-23T17:57:27.512591994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:27.513074 kubelet[2768]: E0123 17:57:27.512969 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:57:27.513074 kubelet[2768]: E0123 17:57:27.513067 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:57:27.513614 kubelet[2768]: E0123 17:57:27.513365 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d89c9c97d-bcllg_calico-system(505697ac-a88f-4c60-b275-ddbfae3b76e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:27.515618 kubelet[2768]: E0123 17:57:27.515542 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:57:28.169076 containerd[1553]: time="2026-01-23T17:57:28.168284075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:28.518831 containerd[1553]: time="2026-01-23T17:57:28.518756464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:28.520488 containerd[1553]: time="2026-01-23T17:57:28.520398438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:28.520931 containerd[1553]: time="2026-01-23T17:57:28.520549239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:28.521201 kubelet[2768]: E0123 17:57:28.521143 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:28.522078 kubelet[2768]: E0123 17:57:28.521734 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:28.522078 kubelet[2768]: E0123 17:57:28.521969 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnsqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-9sgp8_calico-apiserver(ff25427e-89d3-494f-819f-e42ac2ef9668): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:28.523261 kubelet[2768]: E0123 17:57:28.523184 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:57:29.169271 containerd[1553]: time="2026-01-23T17:57:29.169214362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:57:29.519446 containerd[1553]: time="2026-01-23T17:57:29.519307613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:29.521098 containerd[1553]: time="2026-01-23T17:57:29.520955346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:57:29.521098 containerd[1553]: time="2026-01-23T17:57:29.521061507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:57:29.521460 kubelet[2768]: E0123 17:57:29.521381 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:57:29.521898 kubelet[2768]: E0123 17:57:29.521459 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:57:29.522790 kubelet[2768]: E0123 17:57:29.522711 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:29.525346 containerd[1553]: time="2026-01-23T17:57:29.525299302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:57:29.882168 containerd[1553]: time="2026-01-23T17:57:29.881795564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:29.883969 containerd[1553]: time="2026-01-23T17:57:29.883891541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:57:29.884701 containerd[1553]: time="2026-01-23T17:57:29.884076103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:57:29.884862 kubelet[2768]: E0123 17:57:29.884556 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:57:29.884862 kubelet[2768]: E0123 17:57:29.884751 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:57:29.885241 kubelet[2768]: E0123 17:57:29.884980 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:29.886562 kubelet[2768]: E0123 17:57:29.886421 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:57:31.173052 kubelet[2768]: E0123 17:57:31.172988 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:57:33.167797 kubelet[2768]: E0123 17:57:33.167715 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:57:37.169057 kubelet[2768]: E0123 17:57:37.168922 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:38.172670 kubelet[2768]: E0123 17:57:38.172579 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:57:39.167705 kubelet[2768]: E0123 17:57:39.167611 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:57:40.168092 kubelet[2768]: E0123 17:57:40.167696 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:57:41.168969 kubelet[2768]: E0123 17:57:41.168874 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:57:44.175000 containerd[1553]: time="2026-01-23T17:57:44.174731637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:44.515418 containerd[1553]: time="2026-01-23T17:57:44.515367886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:44.516962 containerd[1553]: time="2026-01-23T17:57:44.516900129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:44.517178 containerd[1553]: time="2026-01-23T17:57:44.517135970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:44.517936 kubelet[2768]: E0123 17:57:44.517766 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:44.517936 kubelet[2768]: E0123 17:57:44.517856 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:44.519679 kubelet[2768]: E0123 17:57:44.519601 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9csw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-4l6tt_calico-apiserver(d3d96596-5d19-4484-a8c6-296b60023534): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:44.525180 kubelet[2768]: E0123 17:57:44.525091 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:57:45.168175 containerd[1553]: time="2026-01-23T17:57:45.168127828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:57:45.511268 containerd[1553]: time="2026-01-23T17:57:45.510911337Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 23 17:57:45.512895 containerd[1553]: time="2026-01-23T17:57:45.512771101Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:57:45.512895 containerd[1553]: time="2026-01-23T17:57:45.512826581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:57:45.513226 kubelet[2768]: E0123 17:57:45.513159 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:45.513427 kubelet[2768]: E0123 17:57:45.513236 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:45.513427 kubelet[2768]: E0123 17:57:45.513353 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d134215e77c34cbf9ec3fa5d672d6199,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:45.518547 containerd[1553]: time="2026-01-23T17:57:45.517049949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:57:45.869293 
containerd[1553]: time="2026-01-23T17:57:45.869029197Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:45.872964 containerd[1553]: time="2026-01-23T17:57:45.872842124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:57:45.872964 containerd[1553]: time="2026-01-23T17:57:45.872900444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:45.873124 kubelet[2768]: E0123 17:57:45.873063 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:45.873124 kubelet[2768]: E0123 17:57:45.873110 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:45.873404 kubelet[2768]: E0123 17:57:45.873219 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:45.874957 kubelet[2768]: E0123 17:57:45.874900 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:57:48.170990 containerd[1553]: time="2026-01-23T17:57:48.170601042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:48.524430 containerd[1553]: time="2026-01-23T17:57:48.524198873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:48.525877 containerd[1553]: time="2026-01-23T17:57:48.525717435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:48.525877 containerd[1553]: time="2026-01-23T17:57:48.525804115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:48.526781 kubelet[2768]: E0123 17:57:48.526498 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:48.527580 kubelet[2768]: E0123 17:57:48.526913 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:48.527580 kubelet[2768]: E0123 17:57:48.527360 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpw7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cf56b4c9c-b4lk9_calico-apiserver(0e19fa73-ca02-4aed-bbc2-3496e7625b06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:48.529087 kubelet[2768]: E0123 17:57:48.528990 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:52.170333 containerd[1553]: time="2026-01-23T17:57:52.170055567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:57:52.537187 containerd[1553]: time="2026-01-23T17:57:52.537047480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:52.539198 containerd[1553]: time="2026-01-23T17:57:52.539087161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:57:52.539198 containerd[1553]: time="2026-01-23T17:57:52.539159881Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:52.540677 kubelet[2768]: E0123 17:57:52.539355 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:57:52.540677 kubelet[2768]: E0123 17:57:52.539420 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:57:52.540677 kubelet[2768]: E0123 17:57:52.539644 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n4dtl_calico-system(880a2ec4-932d-40e4-a1c5-e4529584127c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:52.542040 kubelet[2768]: E0123 17:57:52.541298 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:57:53.169708 containerd[1553]: time="2026-01-23T17:57:53.169405858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:57:53.504079 containerd[1553]: time="2026-01-23T17:57:53.504009010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:53.505532 containerd[1553]: time="2026-01-23T17:57:53.505460370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:57:53.505648 containerd[1553]: time="2026-01-23T17:57:53.505572970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:53.505907 kubelet[2768]: E0123 17:57:53.505843 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:57:53.505971 kubelet[2768]: E0123 17:57:53.505907 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:57:53.506204 
kubelet[2768]: E0123 17:57:53.506126 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d89c9c97d-bcllg_calico-system(505697ac-a88f-4c60-b275-ddbfae3b76e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:53.507691 kubelet[2768]: E0123 17:57:53.507601 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 
23 17:57:54.173828 containerd[1553]: time="2026-01-23T17:57:54.173768595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:54.528924 containerd[1553]: time="2026-01-23T17:57:54.528740264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:54.532194 containerd[1553]: time="2026-01-23T17:57:54.531929783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:54.532313 containerd[1553]: time="2026-01-23T17:57:54.532133303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:54.533133 kubelet[2768]: E0123 17:57:54.532493 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:54.533133 kubelet[2768]: E0123 17:57:54.532546 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:54.533133 kubelet[2768]: E0123 17:57:54.532759 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnsqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-9sgp8_calico-apiserver(ff25427e-89d3-494f-819f-e42ac2ef9668): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:54.533505 containerd[1553]: time="2026-01-23T17:57:54.533195862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:57:54.534565 kubelet[2768]: E0123 17:57:54.534059 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:57:54.874195 containerd[1553]: time="2026-01-23T17:57:54.874017776Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:54.876424 containerd[1553]: time="2026-01-23T17:57:54.876310295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:57:54.876681 kubelet[2768]: E0123 17:57:54.876564 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:57:54.876681 kubelet[2768]: E0123 17:57:54.876635 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:57:54.876845 kubelet[2768]: E0123 17:57:54.876763 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:54.877080 containerd[1553]: time="2026-01-23T17:57:54.876367695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:57:54.879459 containerd[1553]: time="2026-01-23T17:57:54.879417774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:57:55.231697 containerd[1553]: time="2026-01-23T17:57:55.231590353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:55.233053 containerd[1553]: time="2026-01-23T17:57:55.232929993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:57:55.233158 containerd[1553]: time="2026-01-23T17:57:55.233123913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:57:55.233586 kubelet[2768]: E0123 17:57:55.233323 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:57:55.234271 kubelet[2768]: E0123 17:57:55.233373 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:57:55.234436 kubelet[2768]: E0123 17:57:55.234379 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:55.236523 kubelet[2768]: E0123 17:57:55.236429 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:57:59.168078 kubelet[2768]: E0123 17:57:59.167991 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:57:59.170647 kubelet[2768]: E0123 17:57:59.168948 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:58:00.171533 kubelet[2768]: E0123 17:58:00.171211 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:58:03.167778 kubelet[2768]: E0123 17:58:03.167725 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:58:07.167599 kubelet[2768]: E0123 17:58:07.167178 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:58:09.169358 kubelet[2768]: E0123 17:58:09.167953 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:58:10.170353 kubelet[2768]: E0123 17:58:10.170225 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:58:11.169250 kubelet[2768]: E0123 17:58:11.168798 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:58:13.169499 kubelet[2768]: E0123 17:58:13.168825 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:58:15.168517 kubelet[2768]: E0123 17:58:15.168441 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:58:15.170965 kubelet[2768]: E0123 17:58:15.170907 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:58:21.168356 kubelet[2768]: E0123 17:58:21.168309 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:58:22.175810 kubelet[2768]: E0123 17:58:22.175498 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:58:23.168175 kubelet[2768]: E0123 17:58:23.168123 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:58:23.171336 kubelet[2768]: E0123 17:58:23.171281 2768 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:58:25.168105 kubelet[2768]: E0123 17:58:25.167921 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:58:29.168778 containerd[1553]: time="2026-01-23T17:58:29.168662580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:58:29.503956 containerd[1553]: time="2026-01-23T17:58:29.503895556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:29.505608 containerd[1553]: time="2026-01-23T17:58:29.505530027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:58:29.505718 containerd[1553]: time="2026-01-23T17:58:29.505689467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:58:29.505973 kubelet[2768]: E0123 17:58:29.505898 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:58:29.506249 kubelet[2768]: E0123 17:58:29.505990 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:58:29.506249 kubelet[2768]: E0123 17:58:29.506182 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9csw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-4l6tt_calico-apiserver(d3d96596-5d19-4484-a8c6-296b60023534): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:29.509540 kubelet[2768]: E0123 17:58:29.508109 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:58:30.171941 kubelet[2768]: E0123 17:58:30.171897 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:58:33.167855 kubelet[2768]: E0123 17:58:33.167803 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:58:35.170554 containerd[1553]: time="2026-01-23T17:58:35.170273086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:58:35.732553 containerd[1553]: time="2026-01-23T17:58:35.732395803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:35.734791 containerd[1553]: time="2026-01-23T17:58:35.734608591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:58:35.734791 containerd[1553]: time="2026-01-23T17:58:35.734714750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:58:35.735026 kubelet[2768]: E0123 17:58:35.734875 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:58:35.735026 kubelet[2768]: E0123 17:58:35.734970 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:58:35.735429 kubelet[2768]: E0123 17:58:35.735101 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kpw7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cf56b4c9c-b4lk9_calico-apiserver(0e19fa73-ca02-4aed-bbc2-3496e7625b06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:35.737587 kubelet[2768]: E0123 17:58:35.736923 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:58:37.171725 containerd[1553]: time="2026-01-23T17:58:37.171685910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:58:37.515699 containerd[1553]: time="2026-01-23T17:58:37.514324150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:37.517432 containerd[1553]: time="2026-01-23T17:58:37.517316854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:58:37.517432 containerd[1553]: time="2026-01-23T17:58:37.517381693Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:58:37.517651 kubelet[2768]: E0123 17:58:37.517553 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:58:37.517651 kubelet[2768]: E0123 17:58:37.517596 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:58:37.518914 kubelet[2768]: E0123 17:58:37.518019 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:37.519005 containerd[1553]: time="2026-01-23T17:58:37.517916650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:58:37.866044 containerd[1553]: time="2026-01-23T17:58:37.865483543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:37.867344 containerd[1553]: time="2026-01-23T17:58:37.867248773Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:58:37.867461 containerd[1553]: time="2026-01-23T17:58:37.867364653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:58:37.867598 kubelet[2768]: E0123 17:58:37.867560 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:58:37.867669 kubelet[2768]: E0123 17:58:37.867613 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:58:37.867981 kubelet[2768]: E0123 17:58:37.867901 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpt2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d89c9c97d-bcllg_calico-system(505697ac-a88f-4c60-b275-ddbfae3b76e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:37.869161 containerd[1553]: time="2026-01-23T17:58:37.869121683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:58:37.869782 kubelet[2768]: E0123 17:58:37.869743 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:58:38.207580 containerd[1553]: time="2026-01-23T17:58:38.206722620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:38.209531 containerd[1553]: time="2026-01-23T17:58:38.209420685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:58:38.209838 containerd[1553]: time="2026-01-23T17:58:38.209709283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:58:38.210129 kubelet[2768]: E0123 17:58:38.210083 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:58:38.210244 kubelet[2768]: E0123 17:58:38.210138 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:58:38.210532 kubelet[2768]: E0123 17:58:38.210305 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r2j7b_calico-system(c91580b5-1014-472e-a6f0-53c9f68e2405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:38.212061 kubelet[2768]: E0123 17:58:38.211991 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:58:39.168352 containerd[1553]: time="2026-01-23T17:58:39.167955133Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:58:39.520399 containerd[1553]: time="2026-01-23T17:58:39.520329280Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:39.522655 containerd[1553]: time="2026-01-23T17:58:39.522579547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:58:39.523237 containerd[1553]: time="2026-01-23T17:58:39.522599787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:58:39.523567 kubelet[2768]: E0123 17:58:39.523484 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:58:39.524006 kubelet[2768]: E0123 17:58:39.523586 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:58:39.524006 kubelet[2768]: E0123 17:58:39.523762 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d134215e77c34cbf9ec3fa5d672d6199,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:39.527675 
containerd[1553]: time="2026-01-23T17:58:39.527633838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:58:39.877702 containerd[1553]: time="2026-01-23T17:58:39.877495560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:39.879489 containerd[1553]: time="2026-01-23T17:58:39.879417749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:58:39.879618 containerd[1553]: time="2026-01-23T17:58:39.879538788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:58:39.879957 kubelet[2768]: E0123 17:58:39.879880 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:58:39.879957 kubelet[2768]: E0123 17:58:39.879936 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:58:39.880742 kubelet[2768]: E0123 17:58:39.880676 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbwbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fc854889f-f6vdq_calico-system(a9a76376-ca93-4b12-a386-13b21a2c5528): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:39.881935 kubelet[2768]: E0123 17:58:39.881888 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:58:43.170167 containerd[1553]: time="2026-01-23T17:58:43.169537932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:58:43.514568 containerd[1553]: time="2026-01-23T17:58:43.514383293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:43.516940 containerd[1553]: time="2026-01-23T17:58:43.516756159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:58:43.517055 containerd[1553]: time="2026-01-23T17:58:43.516894998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:58:43.517582 kubelet[2768]: E0123 17:58:43.517271 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:58:43.517582 kubelet[2768]: E0123 17:58:43.517321 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:58:43.517582 kubelet[2768]: E0123 17:58:43.517463 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqkjs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-n4dtl_calico-system(880a2ec4-932d-40e4-a1c5-e4529584127c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:43.519234 kubelet[2768]: E0123 17:58:43.519188 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:58:44.170178 kubelet[2768]: E0123 17:58:44.169799 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:58:48.173430 containerd[1553]: time="2026-01-23T17:58:48.173394898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:58:48.174374 kubelet[2768]: E0123 17:58:48.174265 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:58:48.525188 containerd[1553]: time="2026-01-23T17:58:48.525138942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:58:48.526549 containerd[1553]: time="2026-01-23T17:58:48.526479933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:58:48.526655 containerd[1553]: time="2026-01-23T17:58:48.526599653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:58:48.526854 kubelet[2768]: E0123 17:58:48.526816 2768 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:58:48.527397 kubelet[2768]: E0123 17:58:48.526969 2768 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:58:48.527397 kubelet[2768]: E0123 17:58:48.527329 2768 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnsqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d67bdb5bc-9sgp8_calico-apiserver(ff25427e-89d3-494f-819f-e42ac2ef9668): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:58:48.528796 kubelet[2768]: E0123 17:58:48.528758 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:58:49.168457 kubelet[2768]: E0123 17:58:49.167994 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:58:51.170469 kubelet[2768]: E0123 17:58:51.170404 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:58:52.173475 kubelet[2768]: E0123 17:58:52.173415 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:58:53.423303 systemd[1]: Started 
sshd@7-46.224.74.11:22-68.220.241.50:53436.service - OpenSSH per-connection server daemon (68.220.241.50:53436). Jan 23 17:58:54.056770 sshd[5130]: Accepted publickey for core from 68.220.241.50 port 53436 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:58:54.061641 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:54.071009 systemd-logind[1529]: New session 8 of user core. Jan 23 17:58:54.075711 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:58:54.611603 sshd[5133]: Connection closed by 68.220.241.50 port 53436 Jan 23 17:58:54.612870 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:54.619749 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:58:54.622378 systemd[1]: sshd@7-46.224.74.11:22-68.220.241.50:53436.service: Deactivated successfully. Jan 23 17:58:54.627990 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:58:54.630062 systemd-logind[1529]: Removed session 8. Jan 23 17:58:56.172266 kubelet[2768]: E0123 17:58:56.171265 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:58:59.168273 kubelet[2768]: E0123 17:58:59.168219 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:58:59.724175 systemd[1]: Started sshd@8-46.224.74.11:22-68.220.241.50:53452.service - OpenSSH per-connection server daemon (68.220.241.50:53452). Jan 23 17:59:00.169078 kubelet[2768]: E0123 17:59:00.168170 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:59:00.358543 sshd[5148]: Accepted publickey for core from 68.220.241.50 port 53452 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:00.362322 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:00.370578 systemd-logind[1529]: New session 9 of user core. 
Jan 23 17:59:00.377768 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:59:00.917547 sshd[5151]: Connection closed by 68.220.241.50 port 53452 Jan 23 17:59:00.916299 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:00.923848 systemd[1]: sshd@8-46.224.74.11:22-68.220.241.50:53452.service: Deactivated successfully. Jan 23 17:59:00.929365 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:59:00.931033 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:59:00.933289 systemd-logind[1529]: Removed session 9. Jan 23 17:59:01.170368 kubelet[2768]: E0123 17:59:01.170163 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:59:02.174074 kubelet[2768]: E0123 17:59:02.174005 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:59:03.169261 kubelet[2768]: E0123 17:59:03.169197 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:59:06.024712 systemd[1]: Started sshd@9-46.224.74.11:22-68.220.241.50:40312.service - OpenSSH per-connection server daemon (68.220.241.50:40312). Jan 23 17:59:06.667625 sshd[5190]: Accepted publickey for core from 68.220.241.50 port 40312 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:06.669377 sshd-session[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:06.678559 systemd-logind[1529]: New session 10 of user core. Jan 23 17:59:06.683075 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 23 17:59:07.167783 kubelet[2768]: E0123 17:59:07.167721 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:59:07.171085 kubelet[2768]: E0123 17:59:07.171015 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:59:07.204062 sshd[5193]: Connection closed by 68.220.241.50 port 40312 Jan 23 17:59:07.205451 sshd-session[5190]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:07.211136 systemd[1]: sshd@9-46.224.74.11:22-68.220.241.50:40312.service: Deactivated successfully. Jan 23 17:59:07.215752 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:59:07.217767 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:59:07.220208 systemd-logind[1529]: Removed session 10. Jan 23 17:59:07.326231 systemd[1]: Started sshd@10-46.224.74.11:22-68.220.241.50:40316.service - OpenSSH per-connection server daemon (68.220.241.50:40316). Jan 23 17:59:07.984596 sshd[5206]: Accepted publickey for core from 68.220.241.50 port 40316 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:07.986492 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:07.994592 systemd-logind[1529]: New session 11 of user core. Jan 23 17:59:08.000807 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 17:59:08.591638 sshd[5213]: Connection closed by 68.220.241.50 port 40316 Jan 23 17:59:08.592150 sshd-session[5206]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:08.603497 systemd[1]: sshd@10-46.224.74.11:22-68.220.241.50:40316.service: Deactivated successfully. Jan 23 17:59:08.609313 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:59:08.611316 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:59:08.614056 systemd-logind[1529]: Removed session 11. Jan 23 17:59:08.699270 systemd[1]: Started sshd@11-46.224.74.11:22-68.220.241.50:40320.service - OpenSSH per-connection server daemon (68.220.241.50:40320). 
Jan 23 17:59:09.337190 sshd[5223]: Accepted publickey for core from 68.220.241.50 port 40320 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:09.339098 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:09.345758 systemd-logind[1529]: New session 12 of user core. Jan 23 17:59:09.351790 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 17:59:09.850624 sshd[5226]: Connection closed by 68.220.241.50 port 40320 Jan 23 17:59:09.852776 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:09.860450 systemd[1]: sshd@11-46.224.74.11:22-68.220.241.50:40320.service: Deactivated successfully. Jan 23 17:59:09.864070 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:59:09.866990 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:59:09.869010 systemd-logind[1529]: Removed session 12. Jan 23 17:59:10.172380 kubelet[2768]: E0123 17:59:10.172253 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:59:13.168298 kubelet[2768]: E0123 17:59:13.168131 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:59:13.169720 kubelet[2768]: E0123 17:59:13.169674 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:59:14.959672 systemd[1]: Started sshd@12-46.224.74.11:22-68.220.241.50:43430.service - OpenSSH per-connection server daemon (68.220.241.50:43430). 
Jan 23 17:59:15.168165 kubelet[2768]: E0123 17:59:15.168099 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:59:15.583359 sshd[5238]: Accepted publickey for core from 68.220.241.50 port 43430 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:15.585352 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:15.593570 systemd-logind[1529]: New session 13 of user core. Jan 23 17:59:15.600826 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 17:59:16.103713 sshd[5241]: Connection closed by 68.220.241.50 port 43430 Jan 23 17:59:16.105356 sshd-session[5238]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:16.112058 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:59:16.112930 systemd[1]: sshd@12-46.224.74.11:22-68.220.241.50:43430.service: Deactivated successfully. Jan 23 17:59:16.116093 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:59:16.118295 systemd-logind[1529]: Removed session 13. Jan 23 17:59:16.169023 kubelet[2768]: E0123 17:59:16.168686 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:59:16.213843 systemd[1]: Started sshd@13-46.224.74.11:22-68.220.241.50:43444.service - OpenSSH per-connection server daemon (68.220.241.50:43444). Jan 23 17:59:16.864596 sshd[5253]: Accepted publickey for core from 68.220.241.50 port 43444 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:16.867455 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:16.876092 systemd-logind[1529]: New session 14 of user core. Jan 23 17:59:16.880708 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 17:59:17.581253 sshd[5256]: Connection closed by 68.220.241.50 port 43444 Jan 23 17:59:17.581814 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:17.590237 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. 
Jan 23 17:59:17.590278 systemd[1]: sshd@13-46.224.74.11:22-68.220.241.50:43444.service: Deactivated successfully. Jan 23 17:59:17.595324 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 17:59:17.601623 systemd-logind[1529]: Removed session 14. Jan 23 17:59:17.686965 systemd[1]: Started sshd@14-46.224.74.11:22-68.220.241.50:43446.service - OpenSSH per-connection server daemon (68.220.241.50:43446). Jan 23 17:59:18.315366 sshd[5266]: Accepted publickey for core from 68.220.241.50 port 43446 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:18.317870 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:18.327067 systemd-logind[1529]: New session 15 of user core. Jan 23 17:59:18.337921 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 17:59:19.168477 kubelet[2768]: E0123 17:59:19.168012 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:59:19.171916 kubelet[2768]: E0123 17:59:19.171795 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:59:19.664486 sshd[5269]: Connection closed by 68.220.241.50 port 43446 Jan 23 17:59:19.665680 sshd-session[5266]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:19.672257 systemd[1]: sshd@14-46.224.74.11:22-68.220.241.50:43446.service: Deactivated successfully. Jan 23 17:59:19.679253 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:59:19.681271 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:59:19.684363 systemd-logind[1529]: Removed session 15. Jan 23 17:59:19.777060 systemd[1]: Started sshd@15-46.224.74.11:22-68.220.241.50:43454.service - OpenSSH per-connection server daemon (68.220.241.50:43454). 
Jan 23 17:59:20.430552 sshd[5286]: Accepted publickey for core from 68.220.241.50 port 43454 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:20.432408 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:20.440186 systemd-logind[1529]: New session 16 of user core. Jan 23 17:59:20.446973 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 17:59:21.158675 sshd[5289]: Connection closed by 68.220.241.50 port 43454 Jan 23 17:59:21.160641 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:21.166409 systemd[1]: sshd@15-46.224.74.11:22-68.220.241.50:43454.service: Deactivated successfully. Jan 23 17:59:21.172977 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:59:21.178400 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:59:21.180496 systemd-logind[1529]: Removed session 16. Jan 23 17:59:21.277722 systemd[1]: Started sshd@16-46.224.74.11:22-68.220.241.50:43470.service - OpenSSH per-connection server daemon (68.220.241.50:43470). Jan 23 17:59:21.929298 sshd[5299]: Accepted publickey for core from 68.220.241.50 port 43470 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:21.932000 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:21.937963 systemd-logind[1529]: New session 17 of user core. Jan 23 17:59:21.943830 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 17:59:22.183568 kubelet[2768]: E0123 17:59:22.182855 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c" Jan 23 17:59:22.488492 sshd[5302]: Connection closed by 68.220.241.50 port 43470 Jan 23 17:59:22.489749 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:22.496068 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit. Jan 23 17:59:22.496418 systemd[1]: sshd@16-46.224.74.11:22-68.220.241.50:43470.service: Deactivated successfully. Jan 23 17:59:22.499965 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:59:22.503594 systemd-logind[1529]: Removed session 17. 
Jan 23 17:59:27.167998 kubelet[2768]: E0123 17:59:27.167681 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6" Jan 23 17:59:27.167998 kubelet[2768]: E0123 17:59:27.167852 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668" Jan 23 17:59:27.600824 systemd[1]: Started sshd@17-46.224.74.11:22-68.220.241.50:44192.service - OpenSSH per-connection server daemon (68.220.241.50:44192). Jan 23 17:59:28.228178 sshd[5318]: Accepted publickey for core from 68.220.241.50 port 44192 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:59:28.230326 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:59:28.235741 systemd-logind[1529]: New session 18 of user core. Jan 23 17:59:28.243001 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 17:59:28.750091 sshd[5321]: Connection closed by 68.220.241.50 port 44192 Jan 23 17:59:28.749967 sshd-session[5318]: pam_unix(sshd:session): session closed for user core Jan 23 17:59:28.759155 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:59:28.760228 systemd[1]: sshd@17-46.224.74.11:22-68.220.241.50:44192.service: Deactivated successfully. Jan 23 17:59:28.765452 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:59:28.768374 systemd-logind[1529]: Removed session 18. 
Jan 23 17:59:30.173464 kubelet[2768]: E0123 17:59:30.173413 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405" Jan 23 17:59:30.173993 kubelet[2768]: E0123 17:59:30.173648 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06" Jan 23 17:59:31.168200 kubelet[2768]: E0123 17:59:31.168120 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528" Jan 23 17:59:33.169049 kubelet[2768]: E0123 17:59:33.168739 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534" Jan 23 17:59:33.857823 systemd[1]: Started sshd@18-46.224.74.11:22-68.220.241.50:48402.service - OpenSSH per-connection server daemon (68.220.241.50:48402). 
Jan 23 17:59:34.479569 sshd[5361]: Accepted publickey for core from 68.220.241.50 port 48402 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE
Jan 23 17:59:34.482422 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 17:59:34.489872 systemd-logind[1529]: New session 19 of user core.
Jan 23 17:59:34.501891 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 17:59:35.002172 sshd[5364]: Connection closed by 68.220.241.50 port 48402
Jan 23 17:59:35.003311 sshd-session[5361]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:35.012194 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit.
Jan 23 17:59:35.013472 systemd[1]: sshd@18-46.224.74.11:22-68.220.241.50:48402.service: Deactivated successfully.
Jan 23 17:59:35.017903 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 17:59:35.021339 systemd-logind[1529]: Removed session 19.
Jan 23 17:59:35.167289 kubelet[2768]: E0123 17:59:35.167185 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c"
Jan 23 17:59:39.169159 kubelet[2768]: E0123 17:59:39.168734 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6"
Jan 23 17:59:41.167786 kubelet[2768]: E0123 17:59:41.167690 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06"
Jan 23 17:59:41.167786 kubelet[2768]: E0123 17:59:41.167705 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668"
Jan 23 17:59:42.170787 kubelet[2768]: E0123 17:59:42.170725 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405"
Jan 23 17:59:43.169085 kubelet[2768]: E0123 17:59:43.168969 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fc854889f-f6vdq" podUID="a9a76376-ca93-4b12-a386-13b21a2c5528"
Jan 23 17:59:46.167613 kubelet[2768]: E0123 17:59:46.167406 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-4l6tt" podUID="d3d96596-5d19-4484-a8c6-296b60023534"
Jan 23 17:59:47.167373 kubelet[2768]: E0123 17:59:47.167207 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-n4dtl" podUID="880a2ec4-932d-40e4-a1c5-e4529584127c"
Jan 23 17:59:50.148095 kubelet[2768]: E0123 17:59:50.148045 2768 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50560->10.0.0.2:2379: read: connection timed out"
Jan 23 17:59:50.155529 systemd[1]: cri-containerd-6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e.scope: Deactivated successfully.
Jan 23 17:59:50.156991 systemd[1]: cri-containerd-6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e.scope: Consumed 3.992s CPU time, 23.2M memory peak, 2.3M read from disk.
Jan 23 17:59:50.159109 containerd[1553]: time="2026-01-23T17:59:50.158986350Z" level=info msg="received container exit event container_id:\"6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e\" id:\"6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e\" pid:2613 exit_status:1 exited_at:{seconds:1769191190 nanos:158613353}"
Jan 23 17:59:50.172338 kubelet[2768]: E0123 17:59:50.171958 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d89c9c97d-bcllg" podUID="505697ac-a88f-4c60-b275-ddbfae3b76e6"
Jan 23 17:59:50.189123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e-rootfs.mount: Deactivated successfully.
Jan 23 17:59:50.425817 systemd[1]: cri-containerd-1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715.scope: Deactivated successfully.
Jan 23 17:59:50.426132 systemd[1]: cri-containerd-1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715.scope: Consumed 36.072s CPU time, 114.2M memory peak.
Jan 23 17:59:50.430161 containerd[1553]: time="2026-01-23T17:59:50.430122153Z" level=info msg="received container exit event container_id:\"1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715\" id:\"1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715\" pid:3096 exit_status:1 exited_at:{seconds:1769191190 nanos:429182760}"
Jan 23 17:59:50.463374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715-rootfs.mount: Deactivated successfully.
Jan 23 17:59:50.830782 systemd[1]: cri-containerd-b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99.scope: Deactivated successfully.
Jan 23 17:59:50.831422 systemd[1]: cri-containerd-b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99.scope: Consumed 3.991s CPU time, 59.7M memory peak, 2.6M read from disk.
Jan 23 17:59:50.835618 containerd[1553]: time="2026-01-23T17:59:50.835578626Z" level=info msg="received container exit event container_id:\"b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99\" id:\"b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99\" pid:2626 exit_status:1 exited_at:{seconds:1769191190 nanos:835183469}"
Jan 23 17:59:50.865192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99-rootfs.mount: Deactivated successfully.
Jan 23 17:59:50.913648 kubelet[2768]: I0123 17:59:50.913599 2768 scope.go:117] "RemoveContainer" containerID="6326a0e39ae7fae2f6a371bfec59db5a83166faba391addc75136f0fff41bd2e"
Jan 23 17:59:50.913915 kubelet[2768]: I0123 17:59:50.913874 2768 scope.go:117] "RemoveContainer" containerID="b364011ef08efc2535d520c921544fe7d96987411aa9b09d74813ae5b1603b99"
Jan 23 17:59:50.914037 kubelet[2768]: I0123 17:59:50.914011 2768 scope.go:117] "RemoveContainer" containerID="1548b48aa4b2b26c10a5e91add086484f469fc40494e1fb012c01a1f4bf3b715"
Jan 23 17:59:50.917574 containerd[1553]: time="2026-01-23T17:59:50.917274636Z" level=info msg="CreateContainer within sandbox \"06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 17:59:50.927585 containerd[1553]: time="2026-01-23T17:59:50.926827767Z" level=info msg="Container 51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:50.932793 containerd[1553]: time="2026-01-23T17:59:50.932738205Z" level=info msg="CreateContainer within sandbox \"27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 17:59:50.933072 containerd[1553]: time="2026-01-23T17:59:50.932760924Z" level=info msg="CreateContainer within sandbox \"50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 17:59:50.937449 containerd[1553]: time="2026-01-23T17:59:50.937415411Z" level=info msg="CreateContainer within sandbox \"06bb7fa30b60be150eeb644d305dfab1181d188d8af9acf4ef4e998c56aeba31\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606\""
Jan 23 17:59:50.938431 containerd[1553]: time="2026-01-23T17:59:50.938365644Z" level=info msg="StartContainer for \"51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606\""
Jan 23 17:59:50.940010 containerd[1553]: time="2026-01-23T17:59:50.939957472Z" level=info msg="connecting to shim 51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606" address="unix:///run/containerd/s/e3cfd5523036778dc2645b566eb82284e6c3e1e023f44f926061e10acc426df1" protocol=ttrpc version=3
Jan 23 17:59:50.946782 containerd[1553]: time="2026-01-23T17:59:50.946717144Z" level=info msg="Container 9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:50.948653 containerd[1553]: time="2026-01-23T17:59:50.948618410Z" level=info msg="Container 6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:50.964609 containerd[1553]: time="2026-01-23T17:59:50.964062698Z" level=info msg="CreateContainer within sandbox \"50d682d1d1cb6ec80ec4a4ec84c461935d31733cb3ff73b5e80c361c51b77833\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7\""
Jan 23 17:59:50.964988 containerd[1553]: time="2026-01-23T17:59:50.964721374Z" level=info msg="StartContainer for \"6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7\""
Jan 23 17:59:50.965229 containerd[1553]: time="2026-01-23T17:59:50.965187410Z" level=info msg="CreateContainer within sandbox \"27ab573afc5333e4e97f5f97815aad07680a9cc3447f17f47b95daefb0933e09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53\""
Jan 23 17:59:50.965377 systemd[1]: Started cri-containerd-51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606.scope - libcontainer container 51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606.
Jan 23 17:59:50.967371 containerd[1553]: time="2026-01-23T17:59:50.966573600Z" level=info msg="StartContainer for \"9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53\""
Jan 23 17:59:50.967371 containerd[1553]: time="2026-01-23T17:59:50.967203596Z" level=info msg="connecting to shim 6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7" address="unix:///run/containerd/s/79fd21cad128a47240780fb907272b4146e57575ac12f1c3832df64e7457c3a3" protocol=ttrpc version=3
Jan 23 17:59:50.969871 containerd[1553]: time="2026-01-23T17:59:50.969822897Z" level=info msg="connecting to shim 9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53" address="unix:///run/containerd/s/bbc7fdc360ce5f26094cacbefbe3de523f0fbe7ee39d3e4a252702fe1ef53027" protocol=ttrpc version=3
Jan 23 17:59:50.998789 systemd[1]: Started cri-containerd-6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7.scope - libcontainer container 6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7.
Jan 23 17:59:51.010925 systemd[1]: Started cri-containerd-9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53.scope - libcontainer container 9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53.
Jan 23 17:59:51.030598 containerd[1553]: time="2026-01-23T17:59:51.030555458Z" level=info msg="StartContainer for \"51a8ed00d7214b8c1de74d44f464f239490b59ce06c33dfb7f7129c427428606\" returns successfully"
Jan 23 17:59:51.084958 containerd[1553]: time="2026-01-23T17:59:51.084403029Z" level=info msg="StartContainer for \"6408fcdf804d9e332809bd3289ff29cfe66b67eee9bdc39a6f3efef99affb3c7\" returns successfully"
Jan 23 17:59:51.097193 containerd[1553]: time="2026-01-23T17:59:51.097150417Z" level=info msg="StartContainer for \"9585bc3f3cc040cb3a4b80c2e5a526b3ea2de941d20d34943814f7859e435b53\" returns successfully"
Jan 23 17:59:51.621531 kubelet[2768]: E0123 17:59:51.620006 2768 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50374->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-5cf56b4c9c-b4lk9.188d6dded1fc6672 calico-apiserver 1766 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-5cf56b4c9c-b4lk9,UID:0e19fa73-ca02-4aed-bbc2-3496e7625b06,APIVersion:v1,ResourceVersion:826,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-3-3-b08bb0c7a1,},FirstTimestamp:2026-01-23 17:57:12 +0000 UTC,LastTimestamp:2026-01-23 17:59:41.167625309 +0000 UTC m=+199.156156262,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-3-b08bb0c7a1,}"
Jan 23 17:59:54.115475 kubelet[2768]: I0123 17:59:54.115402 2768 status_manager.go:895] "Failed to get status for pod" podUID="87c5f1b495b2801ebdcb4e77dba0b154" pod="kube-system/kube-scheduler-ci-4459-2-3-3-b08bb0c7a1" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50480->10.0.0.2:2379: read: connection timed out"
Jan 23 17:59:54.168245 kubelet[2768]: E0123 17:59:54.168119 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cf56b4c9c-b4lk9" podUID="0e19fa73-ca02-4aed-bbc2-3496e7625b06"
Jan 23 17:59:54.168980 kubelet[2768]: E0123 17:59:54.168803 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r2j7b" podUID="c91580b5-1014-472e-a6f0-53c9f68e2405"
Jan 23 17:59:55.169274 kubelet[2768]: E0123 17:59:55.169204 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d67bdb5bc-9sgp8" podUID="ff25427e-89d3-494f-819f-e42ac2ef9668"