Jan 23 17:52:31.812405 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 17:52:31.813292 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026
Jan 23 17:52:31.813317 kernel: KASLR enabled
Jan 23 17:52:31.813323 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 17:52:31.813329 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 23 17:52:31.813335 kernel: random: crng init done
Jan 23 17:52:31.813341 kernel: secureboot: Secure boot disabled
Jan 23 17:52:31.813347 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:52:31.813353 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 23 17:52:31.813359 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 17:52:31.813367 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813373 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813378 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813384 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813391 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813399 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813405 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813411 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813417 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 17:52:31.813423 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 17:52:31.813442 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 23 17:52:31.813448 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:52:31.813454 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 17:52:31.813461 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Jan 23 17:52:31.813466 kernel: Zone ranges:
Jan 23 17:52:31.813472 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:52:31.813480 kernel: DMA32 empty
Jan 23 17:52:31.813486 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 23 17:52:31.813498 kernel: Device empty
Jan 23 17:52:31.813504 kernel: Movable zone start for each node
Jan 23 17:52:31.813510 kernel: Early memory node ranges
Jan 23 17:52:31.813516 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 23 17:52:31.813522 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 23 17:52:31.813528 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 23 17:52:31.813533 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 23 17:52:31.813539 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 23 17:52:31.813545 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 23 17:52:31.813551 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 23 17:52:31.813559 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 23 17:52:31.813565 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 23 17:52:31.813574 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 17:52:31.813580 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 17:52:31.813587 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 23 17:52:31.813595 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:52:31.813601 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 17:52:31.813607 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:52:31.813614 kernel: psci: Trusted OS migration not required
Jan 23 17:52:31.813620 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:52:31.813626 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 23 17:52:31.813633 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:52:31.813639 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:52:31.813646 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:52:31.813652 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:52:31.813658 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:52:31.813666 kernel: CPU features: detected: Spectre-v4
Jan 23 17:52:31.813672 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:52:31.813679 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 17:52:31.813685 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 17:52:31.813691 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 17:52:31.813698 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 17:52:31.813704 kernel: alternatives: applying boot alternatives
Jan 23 17:52:31.813712 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:52:31.813719 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:52:31.813725 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:52:31.813732 kernel: Fallback order for Node 0: 0
Jan 23 17:52:31.813739 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Jan 23 17:52:31.813746 kernel: Policy zone: Normal
Jan 23 17:52:31.813752 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:52:31.813758 kernel: software IO TLB: area num 2.
Jan 23 17:52:31.813764 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 23 17:52:31.813771 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:52:31.813777 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:52:31.813784 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:52:31.813791 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:52:31.813798 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:52:31.813804 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:52:31.813810 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:52:31.813818 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:52:31.813825 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:52:31.813832 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:52:31.813838 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:52:31.813845 kernel: GICv3: 256 SPIs implemented
Jan 23 17:52:31.813851 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:52:31.813857 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:52:31.813864 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 23 17:52:31.813870 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:52:31.813876 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 23 17:52:31.813883 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 23 17:52:31.813891 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:52:31.813897 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:52:31.813904 kernel: GICv3: using LPI property table @0x0000000100120000
Jan 23 17:52:31.813910 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Jan 23 17:52:31.813917 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:52:31.813923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:52:31.813930 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 17:52:31.813936 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 17:52:31.813943 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 17:52:31.813950 kernel: Console: colour dummy device 80x25
Jan 23 17:52:31.813956 kernel: ACPI: Core revision 20240827
Jan 23 17:52:31.813965 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 17:52:31.813972 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:52:31.813979 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:52:31.813985 kernel: landlock: Up and running.
Jan 23 17:52:31.813992 kernel: SELinux: Initializing.
Jan 23 17:52:31.813998 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:52:31.814005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:52:31.814011 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:52:31.814018 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:52:31.814026 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:52:31.814033 kernel: Remapping and enabling EFI services.
Jan 23 17:52:31.814039 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:52:31.814046 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:52:31.814052 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 23 17:52:31.814059 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Jan 23 17:52:31.814065 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 17:52:31.814072 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 17:52:31.814079 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:52:31.814085 kernel: SMP: Total of 2 processors activated.
Jan 23 17:52:31.814098 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:52:31.814105 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:52:31.814112 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 17:52:31.814120 kernel: CPU features: detected: Common not Private translations
Jan 23 17:52:31.814127 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:52:31.814134 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 23 17:52:31.814141 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 17:52:31.814149 kernel: CPU features: detected: LSE atomic instructions
Jan 23 17:52:31.814156 kernel: CPU features: detected: Privileged Access Never
Jan 23 17:52:31.814163 kernel: CPU features: detected: RAS Extension Support
Jan 23 17:52:31.814179 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 17:52:31.814186 kernel: alternatives: applying system-wide alternatives
Jan 23 17:52:31.814193 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 23 17:52:31.814201 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Jan 23 17:52:31.814208 kernel: devtmpfs: initialized
Jan 23 17:52:31.814215 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:52:31.814224 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:52:31.814231 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 17:52:31.814238 kernel: 0 pages in range for non-PLT usage
Jan 23 17:52:31.814245 kernel: 508400 pages in range for PLT usage
Jan 23 17:52:31.814252 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:52:31.814259 kernel: SMBIOS 3.0.0 present.
Jan 23 17:52:31.814266 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 23 17:52:31.814273 kernel: DMI: Memory slots populated: 1/1
Jan 23 17:52:31.814280 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:52:31.814288 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:52:31.814295 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:52:31.814302 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:52:31.814309 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:52:31.814316 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Jan 23 17:52:31.814323 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:52:31.814330 kernel: cpuidle: using governor menu
Jan 23 17:52:31.814337 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:52:31.814344 kernel: ASID allocator initialised with 32768 entries
Jan 23 17:52:31.814352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:52:31.814359 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:52:31.814366 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:52:31.814373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:52:31.814380 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:52:31.814387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:52:31.814394 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:52:31.814401 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:52:31.814407 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:52:31.814416 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:52:31.814423 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:52:31.814437 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:52:31.814444 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:52:31.814451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:52:31.814458 kernel: ACPI: Interpreter enabled
Jan 23 17:52:31.814464 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:52:31.814472 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:52:31.814478 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:52:31.814485 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:52:31.814495 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 17:52:31.814502 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 17:52:31.814509 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 17:52:31.814646 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:52:31.814711 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:52:31.814771 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:52:31.814830 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 23 17:52:31.814891 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 23 17:52:31.814900 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 23 17:52:31.814908 kernel: PCI host bridge to bus 0000:00
Jan 23 17:52:31.814981 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 23 17:52:31.815037 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:52:31.815089 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 23 17:52:31.815142 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 17:52:31.815238 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:52:31.815316 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Jan 23 17:52:31.815377 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Jan 23 17:52:31.815504 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 23 17:52:31.815582 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.815642 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Jan 23 17:52:31.815707 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 17:52:31.815764 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Jan 23 17:52:31.815822 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 23 17:52:31.815887 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.815945 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Jan 23 17:52:31.816003 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 17:52:31.816061 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 17:52:31.816127 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.816226 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Jan 23 17:52:31.816289 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 17:52:31.816347 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 17:52:31.816405 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 23 17:52:31.816500 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.816561 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Jan 23 17:52:31.816623 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 17:52:31.816680 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 17:52:31.816738 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 23 17:52:31.816803 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.816861 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Jan 23 17:52:31.816919 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 17:52:31.816980 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 17:52:31.817040 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 23 17:52:31.817104 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.817163 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Jan 23 17:52:31.817236 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 17:52:31.817295 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Jan 23 17:52:31.817352 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 23 17:52:31.817418 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.817506 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Jan 23 17:52:31.817566 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 17:52:31.817623 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Jan 23 17:52:31.817682 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Jan 23 17:52:31.817747 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.817805 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Jan 23 17:52:31.817865 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 17:52:31.817923 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Jan 23 17:52:31.817991 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 17:52:31.818051 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Jan 23 17:52:31.818108 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 17:52:31.818175 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 17:52:31.818247 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Jan 23 17:52:31.818309 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Jan 23 17:52:31.818380 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 17:52:31.818452 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Jan 23 17:52:31.818514 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 17:52:31.818575 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 17:52:31.818642 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 17:52:31.818702 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Jan 23 17:52:31.820058 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 17:52:31.820143 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Jan 23 17:52:31.820227 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 23 17:52:31.820299 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 17:52:31.820361 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 23 17:52:31.820449 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 17:52:31.820534 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Jan 23 17:52:31.820598 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 23 17:52:31.820669 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 17:52:31.820731 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Jan 23 17:52:31.820792 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 17:52:31.820867 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 17:52:31.820928 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Jan 23 17:52:31.820991 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 23 17:52:31.821050 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 17:52:31.821112 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 23 17:52:31.821183 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 23 17:52:31.821246 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 23 17:52:31.821308 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 17:52:31.821367 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 17:52:31.821496 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 23 17:52:31.821571 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 17:52:31.821631 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 23 17:52:31.821689 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 23 17:52:31.821750 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 17:52:31.821809 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 23 17:52:31.821866 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 23 17:52:31.821930 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 17:52:31.821989 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 23 17:52:31.822046 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 23 17:52:31.822106 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 17:52:31.822193 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 23 17:52:31.822266 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 23 17:52:31.822332 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 17:52:31.822390 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 23 17:52:31.822524 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 23 17:52:31.822595 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 17:52:31.822653 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 23 17:52:31.822711 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 23 17:52:31.822776 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 17:52:31.822838 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 23 17:52:31.822896 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 23 17:52:31.822954 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jan 23 17:52:31.823011 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Jan 23 17:52:31.823070 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Jan 23 17:52:31.823127 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Jan 23 17:52:31.823203 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Jan 23 17:52:31.823265 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Jan 23 17:52:31.823328 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Jan 23 17:52:31.823385 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Jan 23 17:52:31.823465 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Jan 23 17:52:31.823525 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Jan 23 17:52:31.823583 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Jan 23 17:52:31.823641 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Jan 23 17:52:31.823700 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Jan 23 17:52:31.823762 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Jan 23 17:52:31.823821 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Jan 23 17:52:31.823879 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Jan 23 17:52:31.823938 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Jan 23 17:52:31.823996 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Jan 23 17:52:31.826529 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Jan 23 17:52:31.826608 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Jan 23 17:52:31.826669 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Jan 23 17:52:31.826734 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 17:52:31.826794 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Jan 23 17:52:31.826852 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 17:52:31.826913 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Jan 23 17:52:31.826971 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 17:52:31.827032 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Jan 23 17:52:31.827090 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 17:52:31.827150 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Jan 23 17:52:31.827253 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 17:52:31.827328 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Jan 23 17:52:31.827388 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 17:52:31.827470 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Jan 23 17:52:31.827531 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 17:52:31.827595 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Jan 23 17:52:31.827653 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 17:52:31.827713 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Jan 23 17:52:31.827772 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Jan 23 17:52:31.827842 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Jan 23 17:52:31.827924 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 23 17:52:31.827991 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 23 17:52:31.828060 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 23 17:52:31.828127 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 17:52:31.828213 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 17:52:31.828276 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 23 17:52:31.828334 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 17:52:31.828399 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 23 17:52:31.829161 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 17:52:31.829308 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 17:52:31.829376 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 23 17:52:31.829465 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 17:52:31.829556 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 23 17:52:31.829627 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 23 17:52:31.829685 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 17:52:31.829750 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 17:52:31.829826 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 23 17:52:31.829892 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 17:52:31.829957 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 23 17:52:31.830024 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 17:52:31.830119 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 17:52:31.830195 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 23 17:52:31.830253 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 17:52:31.830326 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 23 17:52:31.830402 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 23 17:52:31.830489 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 17:52:31.830550 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 17:52:31.830610 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 23 17:52:31.830682 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 17:52:31.830761 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 23 17:52:31.830824 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 23 17:52:31.830883 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 17:52:31.830967 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 17:52:31.831047 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 17:52:31.831106 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 17:52:31.831215 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Jan 23 17:52:31.831300 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Jan 23 17:52:31.831367 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Jan 23 17:52:31.833488 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 17:52:31.833600 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 17:52:31.833676 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 17:52:31.833741 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 17:52:31.833803 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 17:52:31.833862 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 17:52:31.833920 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 17:52:31.833979 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 17:52:31.834045 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 17:52:31.834114 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 23
17:52:31.834224 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 17:52:31.834298 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:52:31.834361 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 23 17:52:31.834415 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 17:52:31.834495 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 23 17:52:31.834566 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 23 17:52:31.834622 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 23 17:52:31.834685 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 17:52:31.834751 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 23 17:52:31.834822 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 23 17:52:31.834891 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 17:52:31.834974 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 23 17:52:31.835039 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 23 17:52:31.835094 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 17:52:31.835156 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 23 17:52:31.835230 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 23 17:52:31.835285 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 17:52:31.835347 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 23 17:52:31.835402 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 23 17:52:31.835729 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 17:52:31.835814 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 23 17:52:31.835876 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 23 17:52:31.835933 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 17:52:31.836009 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 23 17:52:31.836076 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 23 17:52:31.836134 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 17:52:31.836265 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 23 17:52:31.836345 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 23 17:52:31.836407 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 17:52:31.836541 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 23 17:52:31.837912 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 23 17:52:31.837976 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 17:52:31.837986 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 17:52:31.837994 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 17:52:31.838008 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 17:52:31.838015 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 17:52:31.838023 kernel: iommu: Default domain type: Translated Jan 23 17:52:31.838030 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 17:52:31.838038 kernel: efivars: Registered efivars operations Jan 23 17:52:31.838045 kernel: vgaarb: loaded Jan 23 17:52:31.838052 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 17:52:31.838060 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 17:52:31.838067 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 17:52:31.838076 kernel: pnp: PnP ACPI init Jan 23 17:52:31.838149 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 23 17:52:31.838160 kernel: pnp: PnP ACPI: found 1 devices Jan 23 17:52:31.838183 kernel: NET: Registered PF_INET 
protocol family Jan 23 17:52:31.838192 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 17:52:31.838199 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 17:52:31.838207 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 17:52:31.838214 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 17:52:31.838224 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 17:52:31.838232 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 17:52:31.838239 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:52:31.838247 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 17:52:31.838255 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 17:52:31.838331 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 23 17:52:31.838342 kernel: PCI: CLS 0 bytes, default 64 Jan 23 17:52:31.838350 kernel: kvm [1]: HYP mode not available Jan 23 17:52:31.838357 kernel: Initialise system trusted keyrings Jan 23 17:52:31.838367 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 17:52:31.838374 kernel: Key type asymmetric registered Jan 23 17:52:31.838382 kernel: Asymmetric key parser 'x509' registered Jan 23 17:52:31.838390 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 17:52:31.838397 kernel: io scheduler mq-deadline registered Jan 23 17:52:31.838405 kernel: io scheduler kyber registered Jan 23 17:52:31.838412 kernel: io scheduler bfq registered Jan 23 17:52:31.838420 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 17:52:31.838525 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 23 17:52:31.838592 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 23 17:52:31.838655 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.838718 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 23 17:52:31.838780 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 23 17:52:31.838839 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.838903 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 23 17:52:31.838974 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 23 17:52:31.839034 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.839098 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 23 17:52:31.839159 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 23 17:52:31.839269 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.839334 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 23 17:52:31.839394 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 23 17:52:31.841513 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.841598 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 23 17:52:31.841661 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 23 17:52:31.841730 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.841817 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 23 17:52:31.841880 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 23 17:52:31.841938 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.842001 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 23 17:52:31.842060 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 23 17:52:31.842118 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.842131 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 23 17:52:31.842208 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 23 17:52:31.842270 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 23 17:52:31.842343 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 17:52:31.842354 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 17:52:31.842362 kernel: ACPI: button: Power Button [PWRB] Jan 23 17:52:31.842369 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 17:52:31.842479 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 23 17:52:31.842553 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 23 17:52:31.842567 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 17:52:31.842575 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 17:52:31.842636 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 23 17:52:31.842647 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 23 17:52:31.842654 kernel: thunder_xcv, ver 1.0 Jan 23 17:52:31.842662 kernel: thunder_bgx, ver 1.0 Jan 23 17:52:31.842669 kernel: nicpf, ver 1.0 Jan 23 17:52:31.842676 kernel: nicvf, ver 1.0 Jan 23 17:52:31.842745 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 17:52:31.842803 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:52:31 UTC (1769190751) Jan 23 17:52:31.842815 kernel: hid: raw HID events 
driver (C) Jiri Kosina Jan 23 17:52:31.842823 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 17:52:31.842830 kernel: watchdog: NMI not fully supported Jan 23 17:52:31.842838 kernel: watchdog: Hard watchdog permanently disabled Jan 23 17:52:31.842845 kernel: NET: Registered PF_INET6 protocol family Jan 23 17:52:31.842852 kernel: Segment Routing with IPv6 Jan 23 17:52:31.842860 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 17:52:31.842868 kernel: NET: Registered PF_PACKET protocol family Jan 23 17:52:31.842876 kernel: Key type dns_resolver registered Jan 23 17:52:31.842883 kernel: registered taskstats version 1 Jan 23 17:52:31.842891 kernel: Loading compiled-in X.509 certificates Jan 23 17:52:31.842906 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb' Jan 23 17:52:31.842913 kernel: Demotion targets for Node 0: null Jan 23 17:52:31.842921 kernel: Key type .fscrypt registered Jan 23 17:52:31.842928 kernel: Key type fscrypt-provisioning registered Jan 23 17:52:31.842935 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 17:52:31.842945 kernel: ima: Allocated hash algorithm: sha1 Jan 23 17:52:31.842952 kernel: ima: No architecture policies found Jan 23 17:52:31.842960 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 17:52:31.842967 kernel: clk: Disabling unused clocks Jan 23 17:52:31.842974 kernel: PM: genpd: Disabling unused power domains Jan 23 17:52:31.842982 kernel: Warning: unable to open an initial console. Jan 23 17:52:31.842989 kernel: Freeing unused kernel memory: 39552K Jan 23 17:52:31.842996 kernel: Run /init as init process Jan 23 17:52:31.843004 kernel: with arguments: Jan 23 17:52:31.843012 kernel: /init Jan 23 17:52:31.843019 kernel: with environment: Jan 23 17:52:31.843027 kernel: HOME=/ Jan 23 17:52:31.843034 kernel: TERM=linux Jan 23 17:52:31.843042 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 17:52:31.843052 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:52:31.843060 systemd[1]: Detected virtualization kvm. Jan 23 17:52:31.843068 systemd[1]: Detected architecture arm64. Jan 23 17:52:31.843077 systemd[1]: Running in initrd. Jan 23 17:52:31.843085 systemd[1]: No hostname configured, using default hostname. Jan 23 17:52:31.843093 systemd[1]: Hostname set to <localhost>. Jan 23 17:52:31.843100 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:52:31.843108 systemd[1]: Queued start job for default target initrd.target. Jan 23 17:52:31.843116 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:52:31.843124 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:52:31.843132 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 17:52:31.843142 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:52:31.843150 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 17:52:31.843158 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 17:52:31.843177 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 17:52:31.843185 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 17:52:31.843193 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 17:52:31.843203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:52:31.843211 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:52:31.843219 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:52:31.843227 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:52:31.843235 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:52:31.843243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:52:31.843251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:52:31.843259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 17:52:31.843266 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 17:52:31.843275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:52:31.843283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:52:31.843291 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:52:31.843299 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:52:31.843307 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 17:52:31.843315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:52:31.843322 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 17:52:31.843331 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 17:52:31.843340 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 17:52:31.843348 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:52:31.843356 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 17:52:31.843364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:52:31.843371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 17:52:31.843380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:52:31.843389 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 17:52:31.843397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 17:52:31.843488 systemd-journald[245]: Collecting audit messages is disabled. Jan 23 17:52:31.843515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 17:52:31.843523 kernel: Bridge firewalling registered Jan 23 17:52:31.843532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:52:31.843540 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:52:31.843548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:52:31.843557 systemd-journald[245]: Journal started Jan 23 17:52:31.843576 systemd-journald[245]: Runtime Journal (/run/log/journal/562a170905b449b3b6cdd5babf862df6) is 8M, max 76.5M, 68.5M free. Jan 23 17:52:31.806631 systemd-modules-load[247]: Inserted module 'overlay' Jan 23 17:52:31.829419 systemd-modules-load[247]: Inserted module 'br_netfilter' Jan 23 17:52:31.856472 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 17:52:31.858688 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:52:31.859117 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:52:31.863609 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 23 17:52:31.869593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:52:31.873239 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:52:31.887358 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 17:52:31.888086 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:52:31.889009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:52:31.891811 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 17:52:31.893540 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:52:31.900294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:52:31.923715 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d Jan 23 17:52:31.939658 systemd-resolved[287]: Positive Trust Anchors: Jan 23 17:52:31.939683 systemd-resolved[287]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:52:31.939732 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:52:31.946681 systemd-resolved[287]: Defaulting to hostname 'linux'. Jan 23 17:52:31.947856 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:52:31.948710 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:52:32.030464 kernel: SCSI subsystem initialized Jan 23 17:52:32.035460 kernel: Loading iSCSI transport class v2.0-870. Jan 23 17:52:32.042468 kernel: iscsi: registered transport (tcp) Jan 23 17:52:32.056462 kernel: iscsi: registered transport (qla4xxx) Jan 23 17:52:32.056520 kernel: QLogic iSCSI HBA Driver Jan 23 17:52:32.079662 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:52:32.101039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:52:32.102043 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:52:32.150900 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 17:52:32.153082 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 17:52:32.223491 kernel: raid6: neonx8 gen() 15687 MB/s Jan 23 17:52:32.240522 kernel: raid6: neonx4 gen() 15720 MB/s Jan 23 17:52:32.257483 kernel: raid6: neonx2 gen() 13120 MB/s Jan 23 17:52:32.274509 kernel: raid6: neonx1 gen() 10410 MB/s Jan 23 17:52:32.291482 kernel: raid6: int64x8 gen() 6877 MB/s Jan 23 17:52:32.308512 kernel: raid6: int64x4 gen() 7316 MB/s Jan 23 17:52:32.325475 kernel: raid6: int64x2 gen() 6077 MB/s Jan 23 17:52:32.342499 kernel: raid6: int64x1 gen() 5037 MB/s Jan 23 17:52:32.342596 kernel: raid6: using algorithm neonx4 gen() 15720 MB/s Jan 23 17:52:32.359491 kernel: raid6: .... xor() 12291 MB/s, rmw enabled Jan 23 17:52:32.359563 kernel: raid6: using neon recovery algorithm Jan 23 17:52:32.364635 kernel: xor: measuring software checksum speed Jan 23 17:52:32.364700 kernel: 8regs : 21567 MB/sec Jan 23 17:52:32.364716 kernel: 32regs : 20759 MB/sec Jan 23 17:52:32.364731 kernel: arm64_neon : 28041 MB/sec Jan 23 17:52:32.365466 kernel: xor: using function: arm64_neon (28041 MB/sec) Jan 23 17:52:32.419479 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 17:52:32.427103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:52:32.431288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:52:32.464546 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 23 17:52:32.468813 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:52:32.475023 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 17:52:32.498659 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jan 23 17:52:32.523870 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 17:52:32.525845 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:52:32.586117 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 17:52:32.589376 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 17:52:32.707470 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 17:52:32.709457 kernel: scsi host0: Virtio SCSI HBA Jan 23 17:52:32.713451 kernel: ACPI: bus type USB registered Jan 23 17:52:32.713498 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 17:52:32.715463 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 17:52:32.725897 kernel: usbcore: registered new interface driver usbfs Jan 23 17:52:32.725951 kernel: usbcore: registered new interface driver hub Jan 23 17:52:32.727468 kernel: usbcore: registered new device driver usb Jan 23 17:52:32.731685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:52:32.731800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:52:32.733261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:52:32.735030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:52:32.750783 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 23 17:52:32.750961 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 23 17:52:32.751038 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 23 17:52:32.751887 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 23 17:52:32.752028 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 17:52:32.755068 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 23 17:52:32.755279 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 23 17:52:32.758493 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 17:52:32.759449 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 23 17:52:32.759477 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 23 17:52:32.759646 kernel: GPT:17805311 != 80003071 Jan 23 17:52:32.759656 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 17:52:32.759665 kernel: GPT:17805311 != 80003071 Jan 23 17:52:32.759674 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 17:52:32.759683 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:52:32.761820 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 23 17:52:32.764819 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:52:32.764957 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 17:52:32.767725 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 17:52:32.767856 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 17:52:32.768706 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 17:52:32.768850 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 17:52:32.770452 kernel: hub 1-0:1.0: USB hub found Jan 23 17:52:32.772461 kernel: hub 1-0:1.0: 4 ports detected Jan 23 17:52:32.772618 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 17:52:32.774282 kernel: hub 2-0:1.0: USB hub found Jan 23 17:52:32.774498 kernel: hub 2-0:1.0: 4 ports detected Jan 23 17:52:32.776945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:52:32.824562 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 17:52:32.844909 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 17:52:32.855212 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 17:52:32.861995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Jan 23 17:52:32.862743 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 17:52:32.870598 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 17:52:32.881535 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 17:52:32.884391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:52:32.885754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:52:32.890384 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:52:32.892019 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 17:52:32.895487 disk-uuid[601]: Primary Header is updated. Jan 23 17:52:32.895487 disk-uuid[601]: Secondary Entries is updated. Jan 23 17:52:32.895487 disk-uuid[601]: Secondary Header is updated. Jan 23 17:52:32.905487 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:52:32.923753 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 17:52:33.011474 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 17:52:33.141586 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 17:52:33.141654 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 17:52:33.141903 kernel: usbcore: registered new interface driver usbhid Jan 23 17:52:33.141924 kernel: usbhid: USB HID core driver Jan 23 17:52:33.247504 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 17:52:33.375485 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 17:52:33.428532 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 17:52:33.924887 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 17:52:33.926528 disk-uuid[603]: The operation has completed successfully. Jan 23 17:52:34.001061 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 17:52:34.001212 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 17:52:34.017346 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 17:52:34.034870 sh[626]: Success Jan 23 17:52:34.049729 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 17:52:34.049784 kernel: device-mapper: uevent: version 1.0.3 Jan 23 17:52:34.049805 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 17:52:34.059460 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 17:52:34.112263 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jan 23 17:52:34.116084 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 17:52:34.120233 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 17:52:34.143483 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (638) Jan 23 17:52:34.145071 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64 Jan 23 17:52:34.145142 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:52:34.152559 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 17:52:34.152658 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 17:52:34.152705 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 17:52:34.155037 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 17:52:34.157247 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:52:34.158698 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 17:52:34.160725 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 17:52:34.162685 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 17:52:34.197499 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (669) Jan 23 17:52:34.199454 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:52:34.199512 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:52:34.205464 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:52:34.205528 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:52:34.205541 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:52:34.210512 kernel: BTRFS info (device sda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:52:34.212453 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 17:52:34.218338 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 17:52:34.301874 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:52:34.309017 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:52:34.364225 systemd-networkd[813]: lo: Link UP Jan 23 17:52:34.364235 systemd-networkd[813]: lo: Gained carrier Jan 23 17:52:34.367040 systemd-networkd[813]: Enumeration completed Jan 23 17:52:34.367180 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:52:34.368226 systemd[1]: Reached target network.target - Network. Jan 23 17:52:34.369041 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:52:34.369045 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:52:34.369837 systemd-networkd[813]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 17:52:34.369840 systemd-networkd[813]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 17:52:34.370350 systemd-networkd[813]: eth0: Link UP Jan 23 17:52:34.372568 systemd-networkd[813]: eth1: Link UP Jan 23 17:52:34.372762 systemd-networkd[813]: eth0: Gained carrier Jan 23 17:52:34.376380 ignition[724]: Ignition 2.22.0 Jan 23 17:52:34.372773 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:52:34.376388 ignition[724]: Stage: fetch-offline Jan 23 17:52:34.379074 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:52:34.376418 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:34.379245 systemd-networkd[813]: eth1: Gained carrier Jan 23 17:52:34.376425 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:34.379261 systemd-networkd[813]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:52:34.376536 ignition[724]: parsed url from cmdline: "" Jan 23 17:52:34.382084 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 17:52:34.376539 ignition[724]: no config URL provided Jan 23 17:52:34.376545 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:52:34.376552 ignition[724]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:52:34.376557 ignition[724]: failed to fetch config: resource requires networking Jan 23 17:52:34.376760 ignition[724]: Ignition finished successfully Jan 23 17:52:34.413192 ignition[821]: Ignition 2.22.0 Jan 23 17:52:34.413207 ignition[821]: Stage: fetch Jan 23 17:52:34.413347 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:34.413356 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:34.413478 ignition[821]: parsed url from cmdline: "" Jan 23 17:52:34.413481 ignition[821]: no config URL provided Jan 23 17:52:34.413486 ignition[821]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 17:52:34.413494 ignition[821]: no config at "/usr/lib/ignition/user.ign" Jan 23 17:52:34.413524 ignition[821]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 17:52:34.414169 ignition[821]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 17:52:34.427564 systemd-networkd[813]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 17:52:34.434879 systemd-networkd[813]: eth0: DHCPv4 address 49.13.3.65/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 17:52:34.614391 ignition[821]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 17:52:34.621931 ignition[821]: GET result: OK Jan 23 17:52:34.622044 ignition[821]: parsing config with SHA512: 4d7c481c705c88e23e39f84e4d0325dcc28f6f4290abfcafed5f1f9dd4f2ea66a651331bb089093de8a956621ebb7b216c93c14b9debc1b83802ec77c5b92256 Jan 23 17:52:34.626851 unknown[821]: fetched base config from "system" Jan 23 17:52:34.627279 ignition[821]: fetch: fetch complete Jan 23 17:52:34.626926 unknown[821]: fetched base config 
from "system" Jan 23 17:52:34.627285 ignition[821]: fetch: fetch passed Jan 23 17:52:34.626933 unknown[821]: fetched user config from "hetzner" Jan 23 17:52:34.627334 ignition[821]: Ignition finished successfully Jan 23 17:52:34.631333 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 17:52:34.633398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 17:52:34.668571 ignition[828]: Ignition 2.22.0 Jan 23 17:52:34.669127 ignition[828]: Stage: kargs Jan 23 17:52:34.669291 ignition[828]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:34.669301 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:34.670013 ignition[828]: kargs: kargs passed Jan 23 17:52:34.670055 ignition[828]: Ignition finished successfully Jan 23 17:52:34.673816 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 17:52:34.675920 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 17:52:34.708116 ignition[835]: Ignition 2.22.0 Jan 23 17:52:34.708134 ignition[835]: Stage: disks Jan 23 17:52:34.708282 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:34.708291 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:34.709004 ignition[835]: disks: disks passed Jan 23 17:52:34.712543 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 17:52:34.709047 ignition[835]: Ignition finished successfully Jan 23 17:52:34.714248 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 17:52:34.715454 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 17:52:34.716139 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:52:34.716752 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:52:34.718180 systemd[1]: Reached target basic.target - Basic System. 
Jan 23 17:52:34.720232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 17:52:34.749222 systemd-fsck[843]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 17:52:34.753759 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 17:52:34.757870 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 17:52:34.830518 kernel: EXT4-fs (sda9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none. Jan 23 17:52:34.832252 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 17:52:34.834693 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 17:52:34.837423 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:52:34.839342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 17:52:34.843066 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 17:52:34.846507 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 17:52:34.847663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:52:34.852874 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 17:52:34.857902 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 17:52:34.875481 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (851) Jan 23 17:52:34.880763 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:52:34.880819 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:52:34.884967 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:52:34.885066 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:52:34.885088 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:52:34.887299 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 17:52:34.911016 coreos-metadata[853]: Jan 23 17:52:34.910 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 17:52:34.916164 coreos-metadata[853]: Jan 23 17:52:34.915 INFO Fetch successful Jan 23 17:52:34.919481 coreos-metadata[853]: Jan 23 17:52:34.919 INFO wrote hostname ci-4459-2-3-1-a204a5ad1b to /sysroot/etc/hostname Jan 23 17:52:34.924750 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 17:52:34.925041 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:52:34.932217 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory Jan 23 17:52:34.937503 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 17:52:34.942750 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 17:52:35.039948 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 17:52:35.042250 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 17:52:35.044789 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 23 17:52:35.071473 kernel: BTRFS info (device sda6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:52:35.086574 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 17:52:35.106291 ignition[969]: INFO : Ignition 2.22.0 Jan 23 17:52:35.108021 ignition[969]: INFO : Stage: mount Jan 23 17:52:35.108021 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:35.108021 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:35.108021 ignition[969]: INFO : mount: mount passed Jan 23 17:52:35.108021 ignition[969]: INFO : Ignition finished successfully Jan 23 17:52:35.111217 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 17:52:35.113883 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 17:52:35.143912 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 17:52:35.145947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 17:52:35.169472 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979) Jan 23 17:52:35.170891 kernel: BTRFS info (device sda6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca Jan 23 17:52:35.170932 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 17:52:35.176532 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 17:52:35.176684 kernel: BTRFS info (device sda6): turning on async discard Jan 23 17:52:35.176707 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 17:52:35.179639 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 17:52:35.213650 ignition[996]: INFO : Ignition 2.22.0 Jan 23 17:52:35.213650 ignition[996]: INFO : Stage: files Jan 23 17:52:35.215187 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:35.215187 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:35.215187 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Jan 23 17:52:35.218801 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 17:52:35.218801 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 17:52:35.218801 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 17:52:35.218801 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 17:52:35.223753 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 17:52:35.219929 unknown[996]: wrote ssh authorized keys file for user: core Jan 23 17:52:35.226893 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 17:52:35.226893 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 17:52:35.520667 systemd-networkd[813]: eth0: Gained IPv6LL Jan 23 17:52:35.551459 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:52:35.633407 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 17:52:35.645823 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:52:35.645823 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 17:52:35.645823 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:52:35.645823 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:52:35.645823 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:52:35.645823 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 17:52:35.840820 systemd-networkd[813]: eth1: Gained IPv6LL Jan 23 17:52:36.002059 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 17:52:36.477388 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 17:52:36.477388 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 17:52:36.482802 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:52:36.485831 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 17:52:36.485831 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 17:52:36.489159 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 17:52:36.489159 ignition[996]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 17:52:36.489159 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 17:52:36.489159 ignition[996]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 17:52:36.489159 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 17:52:36.489159 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 17:52:36.489159 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [started] 
writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:52:36.489159 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 17:52:36.489159 ignition[996]: INFO : files: files passed Jan 23 17:52:36.489159 ignition[996]: INFO : Ignition finished successfully Jan 23 17:52:36.489852 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 17:52:36.491819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 17:52:36.495681 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 17:52:36.520886 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 17:52:36.521019 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 17:52:36.530327 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:52:36.530327 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:52:36.533685 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 17:52:36.536489 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:52:36.537895 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 17:52:36.540625 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 17:52:36.594975 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 17:52:36.595123 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 17:52:36.597112 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 17:52:36.598458 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 23 17:52:36.599994 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 17:52:36.600918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 17:52:36.645154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:52:36.648411 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 17:52:36.694815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:52:36.696407 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:52:36.697711 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 17:52:36.698759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 17:52:36.698889 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 17:52:36.700347 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 17:52:36.701059 systemd[1]: Stopped target basic.target - Basic System. Jan 23 17:52:36.702245 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 17:52:36.703305 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 17:52:36.704366 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 17:52:36.705499 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 17:52:36.706741 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 17:52:36.707870 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 17:52:36.709176 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 17:52:36.710180 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 17:52:36.711372 systemd[1]: Stopped target swap.target - Swaps. 
Jan 23 17:52:36.712296 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 17:52:36.712419 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 17:52:36.713791 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:52:36.714471 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:52:36.715583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 17:52:36.716080 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:52:36.716868 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 17:52:36.716984 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 17:52:36.718791 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 17:52:36.718925 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 17:52:36.720224 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 17:52:36.720336 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 17:52:36.721520 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 17:52:36.721622 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 17:52:36.723658 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 17:52:36.726741 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 17:52:36.728904 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 17:52:36.729051 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:52:36.731326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 17:52:36.732492 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 23 17:52:36.740535 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 17:52:36.740662 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 17:52:36.755327 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 17:52:36.763528 ignition[1049]: INFO : Ignition 2.22.0 Jan 23 17:52:36.763528 ignition[1049]: INFO : Stage: umount Jan 23 17:52:36.765832 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 17:52:36.765832 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 17:52:36.767323 ignition[1049]: INFO : umount: umount passed Jan 23 17:52:36.767323 ignition[1049]: INFO : Ignition finished successfully Jan 23 17:52:36.771856 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 17:52:36.774527 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 17:52:36.780394 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 17:52:36.780517 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 17:52:36.782399 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 17:52:36.783580 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 17:52:36.784605 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 17:52:36.784659 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 17:52:36.786444 systemd[1]: Stopped target network.target - Network. Jan 23 17:52:36.791905 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 17:52:36.792001 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 17:52:36.794802 systemd[1]: Stopped target paths.target - Path Units. Jan 23 17:52:36.796472 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 17:52:36.798969 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 17:52:36.799850 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:52:36.802930 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:52:36.804250 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:52:36.804306 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:52:36.805550 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:52:36.805590 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:52:36.806970 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:52:36.807034 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:52:36.807948 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:52:36.807990 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:52:36.809219 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:52:36.809950 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:52:36.811845 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 17:52:36.811930 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 17:52:36.813124 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:52:36.814559 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:52:36.818813 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:52:36.818943 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:52:36.824230 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 17:52:36.824479 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:52:36.824602 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:52:36.827102 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Jan 23 17:52:36.828164 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:52:36.830025 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:52:36.830078 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:52:36.832586 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:52:36.833104 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:52:36.833200 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:52:36.834042 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:52:36.834094 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:52:36.835326 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:52:36.835368 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:52:36.836690 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:52:36.836735 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:52:36.839825 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:52:36.841909 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:52:36.841975 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:52:36.859926 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:52:36.860188 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:52:36.861312 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:52:36.861356 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:52:36.862386 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 23 17:52:36.862415 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:52:36.864159 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:52:36.864219 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:52:36.867262 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:52:36.867350 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:52:36.869312 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:52:36.869414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:52:36.880549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:52:36.883573 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:52:36.883659 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:52:36.887319 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:52:36.887387 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:52:36.889754 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 17:52:36.889812 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 17:52:36.892410 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 17:52:36.892492 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:52:36.893866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:52:36.893913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:52:36.900834 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Jan 23 17:52:36.900903 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 17:52:36.900932 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 17:52:36.900968 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 17:52:36.901407 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:52:36.903599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:52:36.906929 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:52:36.908471 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:52:36.910223 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:52:36.911994 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:52:36.933419 systemd[1]: Switching root. Jan 23 17:52:36.973398 systemd-journald[245]: Journal stopped Jan 23 17:52:37.878546 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Jan 23 17:52:37.878616 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 17:52:37.878629 kernel: SELinux: policy capability open_perms=1
Jan 23 17:52:37.878638 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 17:52:37.878652 kernel: SELinux: policy capability always_check_network=0
Jan 23 17:52:37.878660 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 17:52:37.878669 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 17:52:37.878678 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 17:52:37.878687 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 17:52:37.878701 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 17:52:37.878710 kernel: audit: type=1403 audit(1769190757.101:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 17:52:37.878720 systemd[1]: Successfully loaded SELinux policy in 50.420ms.
Jan 23 17:52:37.878739 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.926ms.
Jan 23 17:52:37.878749 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:52:37.878760 systemd[1]: Detected virtualization kvm.
Jan 23 17:52:37.878769 systemd[1]: Detected architecture arm64.
Jan 23 17:52:37.878779 systemd[1]: Detected first boot.
Jan 23 17:52:37.878790 systemd[1]: Hostname set to .
Jan 23 17:52:37.878800 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:52:37.878811 zram_generator::config[1093]: No configuration found.
Jan 23 17:52:37.878823 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 17:52:37.878834 systemd[1]: Populated /etc with preset unit settings.
Jan 23 17:52:37.878845 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 17:52:37.878856 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 17:52:37.878866 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 17:52:37.878876 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 17:52:37.878886 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 17:52:37.878896 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 17:52:37.878905 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 17:52:37.878915 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 17:52:37.878924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 17:52:37.878936 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 17:52:37.878946 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 17:52:37.878959 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 17:52:37.878969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:52:37.878979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:52:37.878989 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 17:52:37.878999 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 17:52:37.879009 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 17:52:37.879019 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:52:37.879030 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 23 17:52:37.879040 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:52:37.879050 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:52:37.879060 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 17:52:37.879077 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 17:52:37.879087 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:52:37.879098 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 17:52:37.879108 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:52:37.879118 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:52:37.879142 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:52:37.879153 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:52:37.879164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 17:52:37.879174 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 17:52:37.879184 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 17:52:37.879194 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:52:37.879206 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:52:37.879216 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:52:37.879226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 17:52:37.879236 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 17:52:37.879246 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 17:52:37.879257 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 17:52:37.879267 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 17:52:37.879277 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 17:52:37.879286 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 17:52:37.879298 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 17:52:37.879309 systemd[1]: Reached target machines.target - Containers.
Jan 23 17:52:37.879319 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 17:52:37.879330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:52:37.879340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:52:37.879350 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 17:52:37.879359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:52:37.880499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:52:37.880518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:52:37.880533 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 17:52:37.880545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:52:37.880556 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 17:52:37.880566 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 17:52:37.880576 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 17:52:37.880587 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 17:52:37.880598 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 17:52:37.880610 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:52:37.880620 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:52:37.880630 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:52:37.880640 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:52:37.880651 kernel: fuse: init (API version 7.41)
Jan 23 17:52:37.880662 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 17:52:37.880672 kernel: loop: module loaded
Jan 23 17:52:37.880681 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 17:52:37.880692 kernel: ACPI: bus type drm_connector registered
Jan 23 17:52:37.880701 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:52:37.880713 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 17:52:37.880724 systemd[1]: Stopped verity-setup.service.
Jan 23 17:52:37.880734 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 17:52:37.880744 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 17:52:37.880754 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 17:52:37.880764 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 17:52:37.880773 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 17:52:37.880783 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 17:52:37.880793 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:52:37.880805 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 17:52:37.880815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 17:52:37.880825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:52:37.880834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:52:37.880844 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:52:37.880854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:52:37.880865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:52:37.880876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:52:37.880886 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 17:52:37.880897 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 17:52:37.880908 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 17:52:37.880917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:52:37.880927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:52:37.880937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:52:37.880948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:52:37.880958 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 17:52:37.880968 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:52:37.881002 systemd-journald[1161]: Collecting audit messages is disabled.
Jan 23 17:52:37.881026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 17:52:37.881038 systemd-journald[1161]: Journal started
Jan 23 17:52:37.881060 systemd-journald[1161]: Runtime Journal (/run/log/journal/562a170905b449b3b6cdd5babf862df6) is 8M, max 76.5M, 68.5M free.
Jan 23 17:52:37.592247 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 17:52:37.609951 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 17:52:37.610413 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 17:52:37.884647 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 17:52:37.887671 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 17:52:37.887708 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:52:37.891557 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 17:52:37.898825 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 17:52:37.898880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:52:37.905444 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 17:52:37.905503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:52:37.907722 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 17:52:37.911473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:52:37.919444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:52:37.923454 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 17:52:37.931204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:52:37.932924 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:52:37.935695 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 17:52:37.938180 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 17:52:37.939616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 17:52:37.952575 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 17:52:37.968044 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 17:52:37.973552 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 17:52:37.981669 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 17:52:37.989498 kernel: loop0: detected capacity change from 0 to 207008
Jan 23 17:52:38.007505 systemd-journald[1161]: Time spent on flushing to /var/log/journal/562a170905b449b3b6cdd5babf862df6 is 22.895ms for 1179 entries.
Jan 23 17:52:38.007505 systemd-journald[1161]: System Journal (/var/log/journal/562a170905b449b3b6cdd5babf862df6) is 8M, max 584.8M, 576.8M free.
Jan 23 17:52:38.041527 systemd-journald[1161]: Received client request to flush runtime journal.
Jan 23 17:52:38.041571 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 17:52:38.016658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:52:38.017782 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 23 17:52:38.017796 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Jan 23 17:52:38.029383 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:52:38.032910 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:52:38.039514 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 17:52:38.045993 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 17:52:38.059958 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 17:52:38.069632 kernel: loop1: detected capacity change from 0 to 8
Jan 23 17:52:38.077965 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 17:52:38.087501 kernel: loop2: detected capacity change from 0 to 100632
Jan 23 17:52:38.085578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:52:38.117461 kernel: loop3: detected capacity change from 0 to 119840
Jan 23 17:52:38.124924 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 23 17:52:38.124947 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Jan 23 17:52:38.131671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:52:38.153456 kernel: loop4: detected capacity change from 0 to 207008
Jan 23 17:52:38.173581 kernel: loop5: detected capacity change from 0 to 8
Jan 23 17:52:38.178515 kernel: loop6: detected capacity change from 0 to 100632
Jan 23 17:52:38.203462 kernel: loop7: detected capacity change from 0 to 119840
Jan 23 17:52:38.215234 (sd-merge)[1244]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 23 17:52:38.216033 (sd-merge)[1244]: Merged extensions into '/usr'.
Jan 23 17:52:38.221546 systemd[1]: Reload requested from client PID 1193 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 17:52:38.221562 systemd[1]: Reloading...
Jan 23 17:52:38.320583 zram_generator::config[1270]: No configuration found.
Jan 23 17:52:38.481585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 17:52:38.481935 systemd[1]: Reloading finished in 259 ms.
Jan 23 17:52:38.504480 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 17:52:38.511577 systemd[1]: Starting ensure-sysext.service...
Jan 23 17:52:38.513678 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:52:38.515389 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 17:52:38.525418 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 17:52:38.541759 systemd[1]: Reload requested from client PID 1306 ('systemctl') (unit ensure-sysext.service)...
Jan 23 17:52:38.541777 systemd[1]: Reloading...
Jan 23 17:52:38.559402 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 17:52:38.560957 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 17:52:38.561257 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 17:52:38.561623 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 17:52:38.562299 systemd-tmpfiles[1307]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 17:52:38.562546 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Jan 23 17:52:38.562596 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Jan 23 17:52:38.571743 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:52:38.571756 systemd-tmpfiles[1307]: Skipping /boot
Jan 23 17:52:38.582858 systemd-tmpfiles[1307]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 17:52:38.582875 systemd-tmpfiles[1307]: Skipping /boot
Jan 23 17:52:38.641461 zram_generator::config[1344]: No configuration found.
Jan 23 17:52:38.791303 systemd[1]: Reloading finished in 249 ms.
Jan 23 17:52:38.819514 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 17:52:38.827682 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:52:38.835731 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 17:52:38.840399 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 17:52:38.845193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 17:52:38.851689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:52:38.857584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:52:38.863376 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 17:52:38.870905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:52:38.873343 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 17:52:38.879670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 17:52:38.884010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 17:52:38.886306 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:52:38.886459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:52:38.888449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:52:38.888591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:52:38.888668 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:52:38.894482 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 17:52:38.896842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 17:52:38.905816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 17:52:38.908642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 17:52:38.908791 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 17:52:38.913856 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 17:52:38.917451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 17:52:38.926665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 17:52:38.937509 systemd[1]: Finished ensure-sysext.service.
Jan 23 17:52:38.940849 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 17:52:38.943262 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 17:52:38.944302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 17:52:38.947904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 17:52:38.948409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 17:52:38.950392 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 17:52:38.955145 systemd-udevd[1378]: Using default interface naming scheme 'v255'.
Jan 23 17:52:38.958455 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 17:52:38.959906 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 17:52:38.960813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 17:52:38.963870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 17:52:38.964057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 17:52:38.966833 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 17:52:38.971560 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 17:52:38.972908 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 17:52:38.986376 augenrules[1415]: No rules
Jan 23 17:52:38.988742 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 17:52:38.989574 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 17:52:38.994295 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:52:38.999685 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:52:39.024934 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 17:52:39.142890 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 23 17:52:39.231453 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 17:52:39.303535 systemd-networkd[1422]: lo: Link UP
Jan 23 17:52:39.303547 systemd-networkd[1422]: lo: Gained carrier
Jan 23 17:52:39.306350 systemd-networkd[1422]: Enumeration completed
Jan 23 17:52:39.306478 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:52:39.307530 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:52:39.307542 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:52:39.309334 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 17:52:39.310794 systemd-networkd[1422]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:52:39.310806 systemd-networkd[1422]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:52:39.313664 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 17:52:39.316854 systemd-networkd[1422]: eth0: Link UP
Jan 23 17:52:39.317030 systemd-networkd[1422]: eth0: Gained carrier
Jan 23 17:52:39.317058 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:52:39.341681 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 17:52:39.343671 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 17:52:39.344922 systemd-networkd[1422]: eth1: Link UP
Jan 23 17:52:39.345662 systemd-networkd[1422]: eth1: Gained carrier
Jan 23 17:52:39.345686 systemd-networkd[1422]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:52:39.364058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 17:52:39.365842 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 17:52:39.372972 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 17:52:39.383507 systemd-networkd[1422]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 23 17:52:39.384406 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Jan 23 17:52:39.396881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 17:52:39.398519 systemd-networkd[1422]: eth0: DHCPv4 address 49.13.3.65/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 23 17:52:39.400770 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection.
Jan 23 17:52:39.402338 systemd-resolved[1377]: Positive Trust Anchors:
Jan 23 17:52:39.402972 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:52:39.403065 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:52:39.404919 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 23 17:52:39.404968 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 17:52:39.404989 kernel: [drm] features: -context_init
Jan 23 17:52:39.405005 kernel: [drm] number of scanouts: 1
Jan 23 17:52:39.405018 kernel: [drm] number of cap sets: 0
Jan 23 17:52:39.405033 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Jan 23 17:52:39.410188 systemd-resolved[1377]: Using system hostname 'ci-4459-2-3-1-a204a5ad1b'.
Jan 23 17:52:39.410606 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 17:52:39.413190 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:52:39.427313 systemd[1]: Reached target network.target - Network.
Jan 23 17:52:39.428143 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:52:39.429375 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:52:39.430211 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 17:52:39.431301 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 17:52:39.432556 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 17:52:39.433398 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 17:52:39.434567 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 17:52:39.435232 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 17:52:39.435264 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:52:39.436361 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:52:39.436487 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 17:52:39.438400 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 17:52:39.440521 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 17:52:39.444799 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 17:52:39.446596 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 17:52:39.447988 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 17:52:39.454103 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 17:52:39.455392 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 17:52:39.458392 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 17:52:39.460039 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:52:39.460760 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:52:39.462175 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 17:52:39.462220 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 17:52:39.463659 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:52:39.467613 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:52:39.470671 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:52:39.474587 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:52:39.476666 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:52:39.482683 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:52:39.483258 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:52:39.488683 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:52:39.491522 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:52:39.496804 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:52:39.498602 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:52:39.504624 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:52:39.506740 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:52:39.507243 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 17:52:39.510441 jq[1503]: false Jan 23 17:52:39.512727 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:52:39.516601 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:52:39.522490 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 23 17:52:39.523448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:52:39.524648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:52:39.541804 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:52:39.542014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:52:39.550965 jq[1513]: true Jan 23 17:52:39.591420 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:52:39.594082 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:52:39.594334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 17:52:39.597269 update_engine[1512]: I20260123 17:52:39.597014 1512 main.cc:92] Flatcar Update Engine starting Jan 23 17:52:39.601520 jq[1529]: true Jan 23 17:52:39.601756 tar[1520]: linux-arm64/LICENSE Jan 23 17:52:39.601756 tar[1520]: linux-arm64/helm Jan 23 17:52:39.600384 dbus-daemon[1501]: [system] SELinux support is enabled Jan 23 17:52:39.601781 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:52:39.605508 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:52:39.605540 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:52:39.607530 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:52:39.607557 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:52:39.624361 systemd[1]: Started update-engine.service - Update Engine. 
Jan 23 17:52:39.629732 update_engine[1512]: I20260123 17:52:39.626566 1512 update_check_scheduler.cc:74] Next update check in 2m9s Jan 23 17:52:39.647104 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:52:39.667243 coreos-metadata[1500]: Jan 23 17:52:39.666 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 17:52:39.669812 coreos-metadata[1500]: Jan 23 17:52:39.669 INFO Fetch successful Jan 23 17:52:39.669812 coreos-metadata[1500]: Jan 23 17:52:39.669 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 17:52:39.670685 coreos-metadata[1500]: Jan 23 17:52:39.670 INFO Fetch successful Jan 23 17:52:39.682195 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 23 17:52:39.685347 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 23 17:52:39.698730 extend-filesystems[1505]: Found /dev/sda6 Jan 23 17:52:39.718049 extend-filesystems[1505]: Found /dev/sda9 Jan 23 17:52:39.727625 extend-filesystems[1505]: Checking size of /dev/sda9 Jan 23 17:52:39.750718 extend-filesystems[1505]: Resized partition /dev/sda9 Jan 23 17:52:39.761335 extend-filesystems[1574]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:52:39.768831 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 23 17:52:39.768909 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:52:39.771139 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:52:39.776858 systemd[1]: Starting sshkeys.service... Jan 23 17:52:39.834213 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:52:39.837092 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 17:52:39.870518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:52:39.909644 containerd[1527]: time="2026-01-23T17:52:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:52:39.912277 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 17:52:39.913841 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:52:39.933480 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 23 17:52:39.944653 coreos-metadata[1581]: Jan 23 17:52:39.939 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 17:52:39.944653 coreos-metadata[1581]: Jan 23 17:52:39.941 INFO Fetch successful Jan 23 17:52:39.946543 unknown[1581]: wrote ssh authorized keys file for user: core Jan 23 17:52:39.948864 extend-filesystems[1574]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 17:52:39.948864 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 23 17:52:39.948864 extend-filesystems[1574]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 23 17:52:39.954729 extend-filesystems[1505]: Resized filesystem in /dev/sda9 Jan 23 17:52:39.951010 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:52:39.952334 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:52:39.974005 containerd[1527]: time="2026-01-23T17:52:39.973943480Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:52:39.995185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 23 17:52:40.003152 containerd[1527]: time="2026-01-23T17:52:40.003091720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.12µs" Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006478920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006529040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006685240Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006701760Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006730520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006784760Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006798960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006979480Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.006993400Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.007004760Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.007013280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007462 containerd[1527]: time="2026-01-23T17:52:40.007093880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007742 containerd[1527]: time="2026-01-23T17:52:40.007337600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007742 containerd[1527]: time="2026-01-23T17:52:40.007368280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:52:40.007742 containerd[1527]: time="2026-01-23T17:52:40.007380400Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:52:40.008142 containerd[1527]: time="2026-01-23T17:52:40.007423120Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:52:40.008481 containerd[1527]: time="2026-01-23T17:52:40.008457800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:52:40.009568 containerd[1527]: time="2026-01-23T17:52:40.009543520Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:52:40.013872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 17:52:40.017280 containerd[1527]: time="2026-01-23T17:52:40.017239160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:52:40.017606 containerd[1527]: time="2026-01-23T17:52:40.017492000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:52:40.017606 containerd[1527]: time="2026-01-23T17:52:40.017516080Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:52:40.017606 containerd[1527]: time="2026-01-23T17:52:40.017540280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:52:40.017606 containerd[1527]: time="2026-01-23T17:52:40.017554480Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:52:40.017606 containerd[1527]: time="2026-01-23T17:52:40.017565640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:52:40.017606 containerd[1527]: time="2026-01-23T17:52:40.017581080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:52:40.019297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.017593160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019335200Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019360760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019381640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019396400Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019546360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019576080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019602320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019617680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019628400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019638360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: 
time="2026-01-23T17:52:40.019649400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019660000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019681000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:52:40.019995 containerd[1527]: time="2026-01-23T17:52:40.019692760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:52:40.020310 containerd[1527]: time="2026-01-23T17:52:40.019703600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:52:40.020520 containerd[1527]: time="2026-01-23T17:52:40.020391760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:52:40.020520 containerd[1527]: time="2026-01-23T17:52:40.020423080Z" level=info msg="Start snapshots syncer" Jan 23 17:52:40.020520 containerd[1527]: time="2026-01-23T17:52:40.020475320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:52:40.023189 containerd[1527]: time="2026-01-23T17:52:40.022485560Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:52:40.023189 containerd[1527]: time="2026-01-23T17:52:40.022555720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:52:40.023378 containerd[1527]: time="2026-01-23T17:52:40.022629640Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023546640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023580600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023602960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023619280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023631880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023641840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023653000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023689080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023702320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:52:40.023809 containerd[1527]: time="2026-01-23T17:52:40.023713880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:52:40.024434 containerd[1527]: time="2026-01-23T17:52:40.024378160Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.024418000Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025421280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025505600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025514920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025533000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025544800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025634800Z" level=info msg="runtime interface created" Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025641920Z" level=info msg="created NRI interface" Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025651160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.025666200Z" level=info msg="Connect containerd service" Jan 23 17:52:40.026154 containerd[1527]: time="2026-01-23T17:52:40.026081280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:52:40.032785 
containerd[1527]: time="2026-01-23T17:52:40.032241960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:52:40.040605 update-ssh-keys[1596]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:52:40.041903 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:52:40.046617 systemd[1]: Finished sshkeys.service. Jan 23 17:52:40.152310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:52:40.187536 locksmithd[1544]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:52:40.237721 containerd[1527]: time="2026-01-23T17:52:40.237597160Z" level=info msg="Start subscribing containerd event" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239315840Z" level=info msg="Start recovering state" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239449120Z" level=info msg="Start event monitor" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239464040Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239471520Z" level=info msg="Start streaming server" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239479880Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239487560Z" level=info msg="runtime interface starting up..." Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239492960Z" level=info msg="starting plugins..." 
Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239506040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.237810400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239662720Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:52:40.242453 containerd[1527]: time="2026-01-23T17:52:40.239720680Z" level=info msg="containerd successfully booted in 0.331997s" Jan 23 17:52:40.239827 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 17:52:40.334354 systemd-logind[1511]: New seat seat0. Jan 23 17:52:40.335916 systemd-logind[1511]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:52:40.337473 systemd-logind[1511]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 17:52:40.337750 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:52:40.423511 tar[1520]: linux-arm64/README.md Jan 23 17:52:40.439451 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 17:52:40.576686 systemd-networkd[1422]: eth0: Gained IPv6LL Jan 23 17:52:40.579534 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jan 23 17:52:40.582779 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:52:40.584296 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:52:40.589648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:52:40.595545 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:52:40.629804 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 23 17:52:40.737230 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:52:40.761530 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:52:40.767420 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:52:40.788284 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:52:40.790480 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:52:40.795764 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:52:40.819801 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:52:40.824350 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:52:40.827675 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 17:52:40.828538 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:52:41.345688 systemd-networkd[1422]: eth1: Gained IPv6LL Jan 23 17:52:41.346520 systemd-timesyncd[1410]: Network configuration changed, trying to establish connection. Jan 23 17:52:41.415230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:52:41.416977 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:52:41.418325 systemd[1]: Startup finished in 2.355s (kernel) + 5.479s (initrd) + 4.365s (userspace) = 12.200s. 
Jan 23 17:52:41.428319 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:52:41.917830 kubelet[1661]: E0123 17:52:41.917710 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:52:41.922843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:52:41.923137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:52:41.924596 systemd[1]: kubelet.service: Consumed 866ms CPU time, 256.2M memory peak. Jan 23 17:52:52.173695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:52:52.176847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:52:52.361176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:52:52.372975 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:52:52.422127 kubelet[1680]: E0123 17:52:52.422079 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:52:52.426666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:52:52.426808 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:52:52.427409 systemd[1]: kubelet.service: Consumed 169ms CPU time, 105.3M memory peak. 
Jan 23 17:53:02.677763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 17:53:02.681237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:53:02.846072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:53:02.861318 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:53:02.915852 kubelet[1694]: E0123 17:53:02.915803 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:53:02.918562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:53:02.918693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:53:02.919314 systemd[1]: kubelet.service: Consumed 183ms CPU time, 106M memory peak. Jan 23 17:53:11.740711 systemd-timesyncd[1410]: Contacted time server 5.45.97.204:123 (2.flatcar.pool.ntp.org). Jan 23 17:53:11.741353 systemd-timesyncd[1410]: Initial clock synchronization to Fri 2026-01-23 17:53:11.948905 UTC. Jan 23 17:53:13.170141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 17:53:13.174041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:53:13.327983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:53:13.340088 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:53:13.390310 kubelet[1709]: E0123 17:53:13.390241 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:53:13.394392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:53:13.394781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:53:13.395845 systemd[1]: kubelet.service: Consumed 164ms CPU time, 105.3M memory peak. Jan 23 17:53:14.784974 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:53:14.787197 systemd[1]: Started sshd@0-49.13.3.65:22-68.220.241.50:33250.service - OpenSSH per-connection server daemon (68.220.241.50:33250). Jan 23 17:53:15.450225 sshd[1717]: Accepted publickey for core from 68.220.241.50 port 33250 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:15.452748 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:15.469616 systemd-logind[1511]: New session 1 of user core. Jan 23 17:53:15.471705 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:53:15.472734 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:53:15.515394 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:53:15.519279 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 23 17:53:15.536692 (systemd)[1722]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:53:15.541631 systemd-logind[1511]: New session c1 of user core. Jan 23 17:53:15.697886 systemd[1722]: Queued start job for default target default.target. Jan 23 17:53:15.711501 systemd[1722]: Created slice app.slice - User Application Slice. Jan 23 17:53:15.711568 systemd[1722]: Reached target paths.target - Paths. Jan 23 17:53:15.711649 systemd[1722]: Reached target timers.target - Timers. Jan 23 17:53:15.713647 systemd[1722]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 17:53:15.727730 systemd[1722]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:53:15.727851 systemd[1722]: Reached target sockets.target - Sockets. Jan 23 17:53:15.727894 systemd[1722]: Reached target basic.target - Basic System. Jan 23 17:53:15.727922 systemd[1722]: Reached target default.target - Main User Target. Jan 23 17:53:15.727954 systemd[1722]: Startup finished in 177ms. Jan 23 17:53:15.728920 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:53:15.738749 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:53:16.197424 systemd[1]: Started sshd@1-49.13.3.65:22-68.220.241.50:33254.service - OpenSSH per-connection server daemon (68.220.241.50:33254). Jan 23 17:53:16.840617 sshd[1733]: Accepted publickey for core from 68.220.241.50 port 33254 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:16.843665 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:16.849528 systemd-logind[1511]: New session 2 of user core. Jan 23 17:53:16.855819 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 23 17:53:17.286718 sshd[1736]: Connection closed by 68.220.241.50 port 33254 Jan 23 17:53:17.285818 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jan 23 17:53:17.292612 systemd[1]: sshd@1-49.13.3.65:22-68.220.241.50:33254.service: Deactivated successfully. Jan 23 17:53:17.292857 systemd-logind[1511]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:53:17.297177 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 17:53:17.300473 systemd-logind[1511]: Removed session 2. Jan 23 17:53:17.406658 systemd[1]: Started sshd@2-49.13.3.65:22-68.220.241.50:33262.service - OpenSSH per-connection server daemon (68.220.241.50:33262). Jan 23 17:53:18.041320 sshd[1742]: Accepted publickey for core from 68.220.241.50 port 33262 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:18.043410 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:18.050253 systemd-logind[1511]: New session 3 of user core. Jan 23 17:53:18.056858 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:53:18.474844 sshd[1745]: Connection closed by 68.220.241.50 port 33262 Jan 23 17:53:18.473800 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 23 17:53:18.479377 systemd-logind[1511]: Session 3 logged out. Waiting for processes to exit. Jan 23 17:53:18.479844 systemd[1]: sshd@2-49.13.3.65:22-68.220.241.50:33262.service: Deactivated successfully. Jan 23 17:53:18.483238 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:53:18.488557 systemd-logind[1511]: Removed session 3. Jan 23 17:53:18.583210 systemd[1]: Started sshd@3-49.13.3.65:22-68.220.241.50:33264.service - OpenSSH per-connection server daemon (68.220.241.50:33264). 
Jan 23 17:53:19.224516 sshd[1751]: Accepted publickey for core from 68.220.241.50 port 33264 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:19.226334 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:19.233268 systemd-logind[1511]: New session 4 of user core. Jan 23 17:53:19.239706 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:53:19.667139 sshd[1754]: Connection closed by 68.220.241.50 port 33264 Jan 23 17:53:19.667826 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 23 17:53:19.674511 systemd[1]: sshd@3-49.13.3.65:22-68.220.241.50:33264.service: Deactivated successfully. Jan 23 17:53:19.678945 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:53:19.680742 systemd-logind[1511]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:53:19.682233 systemd-logind[1511]: Removed session 4. Jan 23 17:53:19.781597 systemd[1]: Started sshd@4-49.13.3.65:22-68.220.241.50:33276.service - OpenSSH per-connection server daemon (68.220.241.50:33276). Jan 23 17:53:20.402524 sshd[1760]: Accepted publickey for core from 68.220.241.50 port 33276 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:20.403941 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:20.409901 systemd-logind[1511]: New session 5 of user core. Jan 23 17:53:20.414716 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 17:53:20.742844 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:53:20.743108 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:53:20.756691 sudo[1764]: pam_unix(sudo:session): session closed for user root Jan 23 17:53:20.855157 sshd[1763]: Connection closed by 68.220.241.50 port 33276 Jan 23 17:53:20.854963 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jan 23 17:53:20.861227 systemd[1]: sshd@4-49.13.3.65:22-68.220.241.50:33276.service: Deactivated successfully. Jan 23 17:53:20.864564 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:53:20.865736 systemd-logind[1511]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:53:20.867680 systemd-logind[1511]: Removed session 5. Jan 23 17:53:20.968670 systemd[1]: Started sshd@5-49.13.3.65:22-68.220.241.50:33278.service - OpenSSH per-connection server daemon (68.220.241.50:33278). Jan 23 17:53:21.618150 sshd[1770]: Accepted publickey for core from 68.220.241.50 port 33278 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:21.620554 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:21.625892 systemd-logind[1511]: New session 6 of user core. Jan 23 17:53:21.631743 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 17:53:21.962791 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:53:21.963096 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:53:21.969609 sudo[1775]: pam_unix(sudo:session): session closed for user root Jan 23 17:53:21.976428 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:53:21.977091 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:53:21.991640 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:53:22.040722 augenrules[1797]: No rules Jan 23 17:53:22.043239 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:53:22.043774 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:53:22.045528 sudo[1774]: pam_unix(sudo:session): session closed for user root Jan 23 17:53:22.144936 sshd[1773]: Connection closed by 68.220.241.50 port 33278 Jan 23 17:53:22.145722 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jan 23 17:53:22.151237 systemd[1]: sshd@5-49.13.3.65:22-68.220.241.50:33278.service: Deactivated successfully. Jan 23 17:53:22.153168 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:53:22.154025 systemd-logind[1511]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:53:22.155494 systemd-logind[1511]: Removed session 6. Jan 23 17:53:22.256687 systemd[1]: Started sshd@6-49.13.3.65:22-68.220.241.50:33294.service - OpenSSH per-connection server daemon (68.220.241.50:33294). 
Jan 23 17:53:22.872246 sshd[1806]: Accepted publickey for core from 68.220.241.50 port 33294 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:53:22.875307 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:53:22.883343 systemd-logind[1511]: New session 7 of user core. Jan 23 17:53:22.892742 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:53:23.204303 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:53:23.204639 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:53:23.535366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 17:53:23.537740 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 17:53:23.540643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:53:23.547994 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:53:23.693899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:53:23.703964 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:53:23.755461 kubelet[1841]: E0123 17:53:23.754752 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:53:23.757395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:53:23.757550 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 17:53:23.758116 systemd[1]: kubelet.service: Consumed 160ms CPU time, 106.8M memory peak. Jan 23 17:53:23.782662 dockerd[1828]: time="2026-01-23T17:53:23.781574083Z" level=info msg="Starting up" Jan 23 17:53:23.785531 dockerd[1828]: time="2026-01-23T17:53:23.785223594Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:53:23.800045 dockerd[1828]: time="2026-01-23T17:53:23.799482093Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:53:23.829951 systemd[1]: var-lib-docker-metacopy\x2dcheck1586298525-merged.mount: Deactivated successfully. Jan 23 17:53:23.840458 dockerd[1828]: time="2026-01-23T17:53:23.840396943Z" level=info msg="Loading containers: start." Jan 23 17:53:23.853472 kernel: Initializing XFRM netlink socket Jan 23 17:53:24.085991 systemd-networkd[1422]: docker0: Link UP Jan 23 17:53:24.090930 dockerd[1828]: time="2026-01-23T17:53:24.090848130Z" level=info msg="Loading containers: done." 
Jan 23 17:53:24.109898 dockerd[1828]: time="2026-01-23T17:53:24.109817247Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:53:24.110099 dockerd[1828]: time="2026-01-23T17:53:24.109914635Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:53:24.110099 dockerd[1828]: time="2026-01-23T17:53:24.110004705Z" level=info msg="Initializing buildkit" Jan 23 17:53:24.136025 dockerd[1828]: time="2026-01-23T17:53:24.135930206Z" level=info msg="Completed buildkit initialization" Jan 23 17:53:24.145693 dockerd[1828]: time="2026-01-23T17:53:24.145636139Z" level=info msg="Daemon has completed initialization" Jan 23 17:53:24.146068 dockerd[1828]: time="2026-01-23T17:53:24.145700314Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:53:24.146587 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:53:25.219474 update_engine[1512]: I20260123 17:53:25.218875 1512 update_attempter.cc:509] Updating boot flags... Jan 23 17:53:25.232602 containerd[1527]: time="2026-01-23T17:53:25.232251727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 17:53:25.886520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943075641.mount: Deactivated successfully. 
Jan 23 17:53:26.775457 containerd[1527]: time="2026-01-23T17:53:26.775087176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:26.776570 containerd[1527]: time="2026-01-23T17:53:26.776519827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26442080" Jan 23 17:53:26.778465 containerd[1527]: time="2026-01-23T17:53:26.777732559Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:26.780458 containerd[1527]: time="2026-01-23T17:53:26.780380151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:26.782107 containerd[1527]: time="2026-01-23T17:53:26.781525051Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.549232302s" Jan 23 17:53:26.782107 containerd[1527]: time="2026-01-23T17:53:26.781567943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 17:53:26.782979 containerd[1527]: time="2026-01-23T17:53:26.782937060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 17:53:27.947618 containerd[1527]: time="2026-01-23T17:53:27.947519294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:27.949338 containerd[1527]: time="2026-01-23T17:53:27.949285279Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622106" Jan 23 17:53:27.950036 containerd[1527]: time="2026-01-23T17:53:27.949983203Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:27.954486 containerd[1527]: time="2026-01-23T17:53:27.953816749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:27.956922 containerd[1527]: time="2026-01-23T17:53:27.956780770Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.173696888s" Jan 23 17:53:27.956922 containerd[1527]: time="2026-01-23T17:53:27.956820509Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 17:53:27.957609 containerd[1527]: time="2026-01-23T17:53:27.957449713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 17:53:28.952031 containerd[1527]: time="2026-01-23T17:53:28.951967620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:28.953770 containerd[1527]: 
time="2026-01-23T17:53:28.953727856Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616767" Jan 23 17:53:28.954462 containerd[1527]: time="2026-01-23T17:53:28.954412275Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:28.958770 containerd[1527]: time="2026-01-23T17:53:28.958657971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:28.959443 containerd[1527]: time="2026-01-23T17:53:28.959364938Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.00188087s" Jan 23 17:53:28.959443 containerd[1527]: time="2026-01-23T17:53:28.959426688Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 17:53:28.960990 containerd[1527]: time="2026-01-23T17:53:28.959939018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 17:53:29.973927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632928279.mount: Deactivated successfully. 
Jan 23 17:53:30.321331 containerd[1527]: time="2026-01-23T17:53:30.321234959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:30.322971 containerd[1527]: time="2026-01-23T17:53:30.322915787Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558750" Jan 23 17:53:30.323696 containerd[1527]: time="2026-01-23T17:53:30.323652237Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:30.327232 containerd[1527]: time="2026-01-23T17:53:30.327155266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:30.328863 containerd[1527]: time="2026-01-23T17:53:30.328786337Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.368807648s" Jan 23 17:53:30.328967 containerd[1527]: time="2026-01-23T17:53:30.328859830Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 17:53:30.329717 containerd[1527]: time="2026-01-23T17:53:30.329676909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 17:53:31.073730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount741619969.mount: Deactivated successfully. 
Jan 23 17:53:31.683839 containerd[1527]: time="2026-01-23T17:53:31.683741963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:31.686216 containerd[1527]: time="2026-01-23T17:53:31.685557616Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 23 17:53:31.687383 containerd[1527]: time="2026-01-23T17:53:31.687329298Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:31.694390 containerd[1527]: time="2026-01-23T17:53:31.694345082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:31.695626 containerd[1527]: time="2026-01-23T17:53:31.695597096Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.365878693s" Jan 23 17:53:31.695715 containerd[1527]: time="2026-01-23T17:53:31.695699988Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 17:53:31.696240 containerd[1527]: time="2026-01-23T17:53:31.696195086Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 17:53:32.242077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297572547.mount: Deactivated successfully. 
Jan 23 17:53:32.247666 containerd[1527]: time="2026-01-23T17:53:32.247578858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:53:32.248699 containerd[1527]: time="2026-01-23T17:53:32.248664972Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 23 17:53:32.250459 containerd[1527]: time="2026-01-23T17:53:32.249491099Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:53:32.251812 containerd[1527]: time="2026-01-23T17:53:32.251752328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 17:53:32.252737 containerd[1527]: time="2026-01-23T17:53:32.252361905Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.019598ms" Jan 23 17:53:32.252737 containerd[1527]: time="2026-01-23T17:53:32.252395886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 17:53:32.252979 containerd[1527]: time="2026-01-23T17:53:32.252949402Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 17:53:32.836069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217080465.mount: 
Deactivated successfully. Jan 23 17:53:33.763279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 17:53:33.766298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:53:33.918570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:53:33.927937 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:53:33.982870 kubelet[2257]: E0123 17:53:33.982755 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:53:33.986908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:53:33.987070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:53:33.987491 systemd[1]: kubelet.service: Consumed 160ms CPU time, 107.5M memory peak. 
Jan 23 17:53:34.549934 containerd[1527]: time="2026-01-23T17:53:34.549877182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:34.551377 containerd[1527]: time="2026-01-23T17:53:34.551350772Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Jan 23 17:53:34.553100 containerd[1527]: time="2026-01-23T17:53:34.553072225Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:34.558626 containerd[1527]: time="2026-01-23T17:53:34.558547611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:53:34.559652 containerd[1527]: time="2026-01-23T17:53:34.559617206Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.306627655s" Jan 23 17:53:34.559652 containerd[1527]: time="2026-01-23T17:53:34.559651172Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 17:53:39.252085 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:53:39.252296 systemd[1]: kubelet.service: Consumed 160ms CPU time, 107.5M memory peak. Jan 23 17:53:39.255275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:53:39.283922 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)... 
Jan 23 17:53:39.284064 systemd[1]: Reloading... Jan 23 17:53:39.422477 zram_generator::config[2342]: No configuration found. Jan 23 17:53:39.595140 systemd[1]: Reloading finished in 310 ms. Jan 23 17:53:39.670483 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 17:53:39.670631 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 17:53:39.671034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:53:39.671100 systemd[1]: kubelet.service: Consumed 108ms CPU time, 95M memory peak. Jan 23 17:53:39.674964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:53:39.839367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:53:39.858548 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:53:39.917149 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:53:39.917149 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:53:39.917149 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:53:39.918700 kubelet[2384]: I0123 17:53:39.918651 2384 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:53:40.970139 kubelet[2384]: I0123 17:53:40.970064 2384 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:53:40.970139 kubelet[2384]: I0123 17:53:40.970103 2384 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:53:40.970891 kubelet[2384]: I0123 17:53:40.970380 2384 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:53:41.009109 kubelet[2384]: E0123 17:53:41.009046 2384 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.3.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.3.65:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:53:41.010335 kubelet[2384]: I0123 17:53:41.010287 2384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:53:41.019015 kubelet[2384]: I0123 17:53:41.018982 2384 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:53:41.021814 kubelet[2384]: I0123 17:53:41.021745 2384 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:53:41.022823 kubelet[2384]: I0123 17:53:41.022773 2384 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:53:41.023021 kubelet[2384]: I0123 17:53:41.022828 2384 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-1-a204a5ad1b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:53:41.023112 kubelet[2384]: I0123 17:53:41.023090 2384 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 23 17:53:41.023112 kubelet[2384]: I0123 17:53:41.023103 2384 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:53:41.023324 kubelet[2384]: I0123 17:53:41.023309 2384 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:53:41.026787 kubelet[2384]: I0123 17:53:41.026636 2384 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:53:41.026787 kubelet[2384]: I0123 17:53:41.026695 2384 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:53:41.026787 kubelet[2384]: I0123 17:53:41.026724 2384 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:53:41.026787 kubelet[2384]: I0123 17:53:41.026734 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:53:41.030655 kubelet[2384]: W0123 17:53:41.030597 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.3.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-1-a204a5ad1b&limit=500&resourceVersion=0": dial tcp 49.13.3.65:6443: connect: connection refused Jan 23 17:53:41.030879 kubelet[2384]: E0123 17:53:41.030826 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.3.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-1-a204a5ad1b&limit=500&resourceVersion=0\": dial tcp 49.13.3.65:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:53:41.031029 kubelet[2384]: I0123 17:53:41.031014 2384 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:53:41.031805 kubelet[2384]: I0123 17:53:41.031779 2384 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:53:41.032019 kubelet[2384]: W0123 17:53:41.032006 2384 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 17:53:41.034706 kubelet[2384]: I0123 17:53:41.034683 2384 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:53:41.034917 kubelet[2384]: I0123 17:53:41.034902 2384 server.go:1287] "Started kubelet" Jan 23 17:53:41.042100 kubelet[2384]: E0123 17:53:41.041845 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.3.65:6443/api/v1/namespaces/default/events\": dial tcp 49.13.3.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-1-a204a5ad1b.188d6dad9ba9678e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-1-a204a5ad1b,UID:ci-4459-2-3-1-a204a5ad1b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-1-a204a5ad1b,},FirstTimestamp:2026-01-23 17:53:41.034813326 +0000 UTC m=+1.167290380,LastTimestamp:2026-01-23 17:53:41.034813326 +0000 UTC m=+1.167290380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-1-a204a5ad1b,}" Jan 23 17:53:41.042343 kubelet[2384]: W0123 17:53:41.042193 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.3.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.3.65:6443: connect: connection refused Jan 23 17:53:41.042343 kubelet[2384]: E0123 17:53:41.042241 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.3.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.3.65:6443: connect: connection refused" logger="UnhandledError" Jan 23 
17:53:41.042922 kubelet[2384]: I0123 17:53:41.042851 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:53:41.046850 kubelet[2384]: E0123 17:53:41.046819 2384 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:53:41.049576 kubelet[2384]: I0123 17:53:41.049553 2384 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:53:41.049976 kubelet[2384]: E0123 17:53:41.049954 2384 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" Jan 23 17:53:41.050341 kubelet[2384]: I0123 17:53:41.050283 2384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:53:41.051507 kubelet[2384]: I0123 17:53:41.050658 2384 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:53:41.051507 kubelet[2384]: I0123 17:53:41.050880 2384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:53:41.051507 kubelet[2384]: I0123 17:53:41.051029 2384 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:53:41.051507 kubelet[2384]: I0123 17:53:41.049565 2384 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:53:41.052213 kubelet[2384]: I0123 17:53:41.052194 2384 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:53:41.053362 kubelet[2384]: I0123 17:53:41.053334 2384 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:53:41.054491 kubelet[2384]: W0123 17:53:41.054451 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.3.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.3.65:6443: 
connect: connection refused Jan 23 17:53:41.054644 kubelet[2384]: E0123 17:53:41.054623 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.3.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.3.65:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:53:41.054971 kubelet[2384]: I0123 17:53:41.054945 2384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:53:41.055304 kubelet[2384]: E0123 17:53:41.055280 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.3.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-a204a5ad1b?timeout=10s\": dial tcp 49.13.3.65:6443: connect: connection refused" interval="200ms" Jan 23 17:53:41.057563 kubelet[2384]: I0123 17:53:41.057544 2384 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:53:41.058551 kubelet[2384]: I0123 17:53:41.057665 2384 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:53:41.068781 kubelet[2384]: I0123 17:53:41.068623 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 17:53:41.069823 kubelet[2384]: I0123 17:53:41.069800 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:53:41.069924 kubelet[2384]: I0123 17:53:41.069915 2384 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:53:41.069989 kubelet[2384]: I0123 17:53:41.069981 2384 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 17:53:41.070035 kubelet[2384]: I0123 17:53:41.070028 2384 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:53:41.070133 kubelet[2384]: E0123 17:53:41.070109 2384 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:53:41.077458 kubelet[2384]: W0123 17:53:41.077104 2384 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.3.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.3.65:6443: connect: connection refused Jan 23 17:53:41.077838 kubelet[2384]: E0123 17:53:41.077780 2384 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.3.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.3.65:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:53:41.086108 kubelet[2384]: I0123 17:53:41.085838 2384 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:53:41.086108 kubelet[2384]: I0123 17:53:41.085867 2384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:53:41.086108 kubelet[2384]: I0123 17:53:41.085886 2384 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:53:41.088378 kubelet[2384]: I0123 17:53:41.088358 2384 policy_none.go:49] "None policy: Start" Jan 23 17:53:41.088505 kubelet[2384]: I0123 17:53:41.088495 2384 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:53:41.088623 kubelet[2384]: I0123 17:53:41.088611 2384 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:53:41.095667 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 17:53:41.110326 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 23 17:53:41.116965 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 17:53:41.129989 kubelet[2384]: I0123 17:53:41.129930 2384 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:53:41.130251 kubelet[2384]: I0123 17:53:41.130213 2384 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:53:41.130337 kubelet[2384]: I0123 17:53:41.130241 2384 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:53:41.131349 kubelet[2384]: I0123 17:53:41.131025 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:53:41.134130 kubelet[2384]: E0123 17:53:41.133630 2384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:53:41.134130 kubelet[2384]: E0123 17:53:41.133740 2384 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-1-a204a5ad1b\" not found" Jan 23 17:53:41.184442 systemd[1]: Created slice kubepods-burstable-podce9811df4419f7a938354af71d68532b.slice - libcontainer container kubepods-burstable-podce9811df4419f7a938354af71d68532b.slice. Jan 23 17:53:41.195119 kubelet[2384]: E0123 17:53:41.194849 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.199551 systemd[1]: Created slice kubepods-burstable-pod528e801571ceda7163e55fc08446afd3.slice - libcontainer container kubepods-burstable-pod528e801571ceda7163e55fc08446afd3.slice. 
Jan 23 17:53:41.210315 kubelet[2384]: E0123 17:53:41.210274 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.213762 systemd[1]: Created slice kubepods-burstable-pod75e8c495c966634cc0641399e6f81de9.slice - libcontainer container kubepods-burstable-pod75e8c495c966634cc0641399e6f81de9.slice. Jan 23 17:53:41.216106 kubelet[2384]: E0123 17:53:41.216079 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.234175 kubelet[2384]: I0123 17:53:41.233973 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.234804 kubelet[2384]: E0123 17:53:41.234749 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.3.65:6443/api/v1/nodes\": dial tcp 49.13.3.65:6443: connect: connection refused" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255641 kubelet[2384]: I0123 17:53:41.255573 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce9811df4419f7a938354af71d68532b-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" (UID: \"ce9811df4419f7a938354af71d68532b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255641 kubelet[2384]: I0123 17:53:41.255636 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce9811df4419f7a938354af71d68532b-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" (UID: \"ce9811df4419f7a938354af71d68532b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255861 kubelet[2384]: I0123 17:53:41.255666 2384 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255861 kubelet[2384]: I0123 17:53:41.255690 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255861 kubelet[2384]: I0123 17:53:41.255726 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce9811df4419f7a938354af71d68532b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" (UID: \"ce9811df4419f7a938354af71d68532b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255861 kubelet[2384]: I0123 17:53:41.255750 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.255861 kubelet[2384]: I0123 17:53:41.255774 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: 
\"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.256092 kubelet[2384]: I0123 17:53:41.255798 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.256092 kubelet[2384]: I0123 17:53:41.255830 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75e8c495c966634cc0641399e6f81de9-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-1-a204a5ad1b\" (UID: \"75e8c495c966634cc0641399e6f81de9\") " pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.256861 kubelet[2384]: E0123 17:53:41.256744 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.3.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-a204a5ad1b?timeout=10s\": dial tcp 49.13.3.65:6443: connect: connection refused" interval="400ms" Jan 23 17:53:41.437691 kubelet[2384]: I0123 17:53:41.437643 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.438530 kubelet[2384]: E0123 17:53:41.438415 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.3.65:6443/api/v1/nodes\": dial tcp 49.13.3.65:6443: connect: connection refused" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.496711 containerd[1527]: time="2026-01-23T17:53:41.496264927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-1-a204a5ad1b,Uid:ce9811df4419f7a938354af71d68532b,Namespace:kube-system,Attempt:0,}" Jan 
23 17:53:41.512987 containerd[1527]: time="2026-01-23T17:53:41.512163459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-1-a204a5ad1b,Uid:528e801571ceda7163e55fc08446afd3,Namespace:kube-system,Attempt:0,}" Jan 23 17:53:41.533327 containerd[1527]: time="2026-01-23T17:53:41.533289503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-1-a204a5ad1b,Uid:75e8c495c966634cc0641399e6f81de9,Namespace:kube-system,Attempt:0,}" Jan 23 17:53:41.536787 containerd[1527]: time="2026-01-23T17:53:41.536712877Z" level=info msg="connecting to shim 18c68094fb03ac80c25df28976542331d623922663d2ed2fe13f28b766a88b64" address="unix:///run/containerd/s/2372c480b519e8cbd778f79b08aeeab375dd94e34e63c1b94f1eab74998e99f1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:53:41.550154 containerd[1527]: time="2026-01-23T17:53:41.550089283Z" level=info msg="connecting to shim 9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53" address="unix:///run/containerd/s/5174e7c5a55727a4b25a90bc737ab6938c5f8683d93d9b54a25557bf5f72a27f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:53:41.575083 containerd[1527]: time="2026-01-23T17:53:41.575024190Z" level=info msg="connecting to shim 1b67fd30eb14b5742985500d0501a84f41071d022e74ad400e9d2ddca67bfb3e" address="unix:///run/containerd/s/8ffc30f3380378f98b2c5f94ae477d690f30107fd372f7602a98fce2d41d03a4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:53:41.580877 systemd[1]: Started cri-containerd-18c68094fb03ac80c25df28976542331d623922663d2ed2fe13f28b766a88b64.scope - libcontainer container 18c68094fb03ac80c25df28976542331d623922663d2ed2fe13f28b766a88b64. Jan 23 17:53:41.598708 systemd[1]: Started cri-containerd-9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53.scope - libcontainer container 9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53. 
Jan 23 17:53:41.619981 systemd[1]: Started cri-containerd-1b67fd30eb14b5742985500d0501a84f41071d022e74ad400e9d2ddca67bfb3e.scope - libcontainer container 1b67fd30eb14b5742985500d0501a84f41071d022e74ad400e9d2ddca67bfb3e. Jan 23 17:53:41.656918 containerd[1527]: time="2026-01-23T17:53:41.656864161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-1-a204a5ad1b,Uid:ce9811df4419f7a938354af71d68532b,Namespace:kube-system,Attempt:0,} returns sandbox id \"18c68094fb03ac80c25df28976542331d623922663d2ed2fe13f28b766a88b64\"" Jan 23 17:53:41.658062 kubelet[2384]: E0123 17:53:41.657985 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.3.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-a204a5ad1b?timeout=10s\": dial tcp 49.13.3.65:6443: connect: connection refused" interval="800ms" Jan 23 17:53:41.661054 containerd[1527]: time="2026-01-23T17:53:41.660927522Z" level=info msg="CreateContainer within sandbox \"18c68094fb03ac80c25df28976542331d623922663d2ed2fe13f28b766a88b64\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:53:41.669199 containerd[1527]: time="2026-01-23T17:53:41.669147415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-1-a204a5ad1b,Uid:528e801571ceda7163e55fc08446afd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53\"" Jan 23 17:53:41.672573 containerd[1527]: time="2026-01-23T17:53:41.672505034Z" level=info msg="CreateContainer within sandbox \"9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:53:41.672768 containerd[1527]: time="2026-01-23T17:53:41.672529607Z" level=info msg="Container 64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19: CDI devices from CRI Config.CDIDevices: []" Jan 23 
17:53:41.683462 containerd[1527]: time="2026-01-23T17:53:41.683242090Z" level=info msg="CreateContainer within sandbox \"18c68094fb03ac80c25df28976542331d623922663d2ed2fe13f28b766a88b64\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19\"" Jan 23 17:53:41.683974 containerd[1527]: time="2026-01-23T17:53:41.683945511Z" level=info msg="StartContainer for \"64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19\"" Jan 23 17:53:41.685205 containerd[1527]: time="2026-01-23T17:53:41.685171175Z" level=info msg="connecting to shim 64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19" address="unix:///run/containerd/s/2372c480b519e8cbd778f79b08aeeab375dd94e34e63c1b94f1eab74998e99f1" protocol=ttrpc version=3 Jan 23 17:53:41.700336 containerd[1527]: time="2026-01-23T17:53:41.700247101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-1-a204a5ad1b,Uid:75e8c495c966634cc0641399e6f81de9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b67fd30eb14b5742985500d0501a84f41071d022e74ad400e9d2ddca67bfb3e\"" Jan 23 17:53:41.700887 containerd[1527]: time="2026-01-23T17:53:41.700794358Z" level=info msg="Container 715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:53:41.704113 containerd[1527]: time="2026-01-23T17:53:41.704070893Z" level=info msg="CreateContainer within sandbox \"1b67fd30eb14b5742985500d0501a84f41071d022e74ad400e9d2ddca67bfb3e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:53:41.716895 containerd[1527]: time="2026-01-23T17:53:41.716718223Z" level=info msg="CreateContainer within sandbox \"9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf\"" Jan 23 
17:53:41.718512 containerd[1527]: time="2026-01-23T17:53:41.718474415Z" level=info msg="StartContainer for \"715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf\"" Jan 23 17:53:41.720625 containerd[1527]: time="2026-01-23T17:53:41.720591561Z" level=info msg="connecting to shim 715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf" address="unix:///run/containerd/s/5174e7c5a55727a4b25a90bc737ab6938c5f8683d93d9b54a25557bf5f72a27f" protocol=ttrpc version=3 Jan 23 17:53:41.724553 containerd[1527]: time="2026-01-23T17:53:41.724514206Z" level=info msg="Container 63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:53:41.724721 systemd[1]: Started cri-containerd-64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19.scope - libcontainer container 64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19. Jan 23 17:53:41.740463 containerd[1527]: time="2026-01-23T17:53:41.739561757Z" level=info msg="CreateContainer within sandbox \"1b67fd30eb14b5742985500d0501a84f41071d022e74ad400e9d2ddca67bfb3e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846\"" Jan 23 17:53:41.741519 containerd[1527]: time="2026-01-23T17:53:41.741491082Z" level=info msg="StartContainer for \"63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846\"" Jan 23 17:53:41.744084 containerd[1527]: time="2026-01-23T17:53:41.744053751Z" level=info msg="connecting to shim 63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846" address="unix:///run/containerd/s/8ffc30f3380378f98b2c5f94ae477d690f30107fd372f7602a98fce2d41d03a4" protocol=ttrpc version=3 Jan 23 17:53:41.757959 systemd[1]: Started cri-containerd-715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf.scope - libcontainer container 715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf. 
Jan 23 17:53:41.778558 systemd[1]: Started cri-containerd-63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846.scope - libcontainer container 63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846. Jan 23 17:53:41.801180 containerd[1527]: time="2026-01-23T17:53:41.800931400Z" level=info msg="StartContainer for \"64a67ed3fcb6cf546d502fc14e3bbd4279b9378a373623dbf0f9483214491a19\" returns successfully" Jan 23 17:53:41.837122 containerd[1527]: time="2026-01-23T17:53:41.837075579Z" level=info msg="StartContainer for \"715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf\" returns successfully" Jan 23 17:53:41.841325 kubelet[2384]: I0123 17:53:41.841218 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.841926 kubelet[2384]: E0123 17:53:41.841866 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.3.65:6443/api/v1/nodes\": dial tcp 49.13.3.65:6443: connect: connection refused" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:41.880903 containerd[1527]: time="2026-01-23T17:53:41.880863939Z" level=info msg="StartContainer for \"63027c7e3956a09690086823e43a9a08b469150be79303cc85f1eb7ac8502846\" returns successfully" Jan 23 17:53:42.088565 kubelet[2384]: E0123 17:53:42.088533 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:42.094395 kubelet[2384]: E0123 17:53:42.094354 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:42.095919 kubelet[2384]: E0123 17:53:42.095886 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" 
Jan 23 17:53:42.644750 kubelet[2384]: I0123 17:53:42.644721 2384 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:43.099210 kubelet[2384]: E0123 17:53:43.099176 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:43.099536 kubelet[2384]: E0123 17:53:43.099471 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:43.957314 kubelet[2384]: E0123 17:53:43.957274 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:44.456257 kubelet[2384]: E0123 17:53:44.456214 2384 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:44.500272 kubelet[2384]: E0123 17:53:44.500229 2384 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-1-a204a5ad1b\" not found" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:44.542673 kubelet[2384]: I0123 17:53:44.542629 2384 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:44.542673 kubelet[2384]: E0123 17:53:44.542670 2384 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-3-1-a204a5ad1b\": node \"ci-4459-2-3-1-a204a5ad1b\" not found" Jan 23 17:53:44.551990 kubelet[2384]: I0123 17:53:44.551944 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b" Jan 23 17:53:44.609755 kubelet[2384]: E0123 17:53:44.609709 2384 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-1-a204a5ad1b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:44.609755 kubelet[2384]: I0123 17:53:44.609749 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:44.620110 kubelet[2384]: E0123 17:53:44.620074 2384 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:44.620110 kubelet[2384]: I0123 17:53:44.620111 2384 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:44.627927 kubelet[2384]: E0123 17:53:44.627889 2384 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:45.040844 kubelet[2384]: I0123 17:53:45.040782 2384 apiserver.go:52] "Watching apiserver"
Jan 23 17:53:45.051344 kubelet[2384]: I0123 17:53:45.051230 2384 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 17:53:46.812889 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-7.scope)...
Jan 23 17:53:46.812905 systemd[1]: Reloading...
Jan 23 17:53:46.936513 zram_generator::config[2699]: No configuration found.
Jan 23 17:53:47.136903 systemd[1]: Reloading finished in 323 ms.
Jan 23 17:53:47.174749 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:53:47.191878 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 17:53:47.192381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:53:47.192541 systemd[1]: kubelet.service: Consumed 1.624s CPU time, 126.1M memory peak.
Jan 23 17:53:47.196720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:53:47.354669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:53:47.366092 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 17:53:47.427261 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 17:53:47.428497 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 17:53:47.429504 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 17:53:47.429644 kubelet[2738]: I0123 17:53:47.429605 2738 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 17:53:47.438976 kubelet[2738]: I0123 17:53:47.438929 2738 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 17:53:47.438976 kubelet[2738]: I0123 17:53:47.438962 2738 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 17:53:47.439255 kubelet[2738]: I0123 17:53:47.439231 2738 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 17:53:47.444052 kubelet[2738]: I0123 17:53:47.442536 2738 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 17:53:47.447531 kubelet[2738]: I0123 17:53:47.447502 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 17:53:47.452743 kubelet[2738]: I0123 17:53:47.452711 2738 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 17:53:47.455315 kubelet[2738]: I0123 17:53:47.455293 2738 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 17:53:47.455578 kubelet[2738]: I0123 17:53:47.455549 2738 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 17:53:47.455751 kubelet[2738]: I0123 17:53:47.455580 2738 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-1-a204a5ad1b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 17:53:47.455830 kubelet[2738]: I0123 17:53:47.455759 2738 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 17:53:47.455830 kubelet[2738]: I0123 17:53:47.455769 2738 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 17:53:47.455830 kubelet[2738]: I0123 17:53:47.455812 2738 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:53:47.455978 kubelet[2738]: I0123 17:53:47.455963 2738 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 17:53:47.456011 kubelet[2738]: I0123 17:53:47.455981 2738 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 17:53:47.456578 kubelet[2738]: I0123 17:53:47.456556 2738 kubelet.go:352] "Adding apiserver pod source"
Jan 23 17:53:47.456629 kubelet[2738]: I0123 17:53:47.456584 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 17:53:47.461443 kubelet[2738]: I0123 17:53:47.459180 2738 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 17:53:47.461443 kubelet[2738]: I0123 17:53:47.459771 2738 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 17:53:47.461443 kubelet[2738]: I0123 17:53:47.460254 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 17:53:47.461443 kubelet[2738]: I0123 17:53:47.460281 2738 server.go:1287] "Started kubelet"
Jan 23 17:53:47.465987 kubelet[2738]: I0123 17:53:47.465958 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 17:53:47.467635 kubelet[2738]: I0123 17:53:47.467558 2738 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 17:53:47.472459 kubelet[2738]: I0123 17:53:47.470257 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 17:53:47.472459 kubelet[2738]: I0123 17:53:47.470623 2738 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 17:53:47.472459 kubelet[2738]: I0123 17:53:47.470673 2738 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 17:53:47.472459 kubelet[2738]: I0123 17:53:47.470866 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 17:53:47.477296 kubelet[2738]: I0123 17:53:47.477251 2738 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 17:53:47.478628 kubelet[2738]: E0123 17:53:47.478597 2738 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-1-a204a5ad1b\" not found"
Jan 23 17:53:47.483995 kubelet[2738]: I0123 17:53:47.483973 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 17:53:47.485441 kubelet[2738]: I0123 17:53:47.484240 2738 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 17:53:47.490928 kubelet[2738]: I0123 17:53:47.490901 2738 factory.go:221] Registration of the containerd container factory successfully
Jan 23 17:53:47.490928 kubelet[2738]: I0123 17:53:47.490922 2738 factory.go:221] Registration of the systemd container factory successfully
Jan 23 17:53:47.491063 kubelet[2738]: I0123 17:53:47.490998 2738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 17:53:47.500076 kubelet[2738]: I0123 17:53:47.500003 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 17:53:47.501307 kubelet[2738]: I0123 17:53:47.501279 2738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 17:53:47.501621 kubelet[2738]: I0123 17:53:47.501534 2738 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 17:53:47.502093 kubelet[2738]: I0123 17:53:47.502073 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 17:53:47.502195 kubelet[2738]: I0123 17:53:47.502164 2738 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 17:53:47.502296 kubelet[2738]: E0123 17:53:47.502278 2738 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 17:53:47.555768 kubelet[2738]: I0123 17:53:47.555738 2738 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 17:53:47.555928 kubelet[2738]: I0123 17:53:47.555914 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 17:53:47.556015 kubelet[2738]: I0123 17:53:47.556004 2738 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:53:47.556319 kubelet[2738]: I0123 17:53:47.556305 2738 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 17:53:47.556415 kubelet[2738]: I0123 17:53:47.556391 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 17:53:47.556714 kubelet[2738]: I0123 17:53:47.556688 2738 policy_none.go:49] "None policy: Start"
Jan 23 17:53:47.556795 kubelet[2738]: I0123 17:53:47.556786 2738 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 17:53:47.556966 kubelet[2738]: I0123 17:53:47.556943 2738 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 17:53:47.557286 kubelet[2738]: I0123 17:53:47.557250 2738 state_mem.go:75] "Updated machine memory state"
Jan 23 17:53:47.561876 kubelet[2738]: I0123 17:53:47.561844 2738 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 17:53:47.562711 kubelet[2738]: I0123 17:53:47.562683 2738 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 17:53:47.562779 kubelet[2738]: I0123 17:53:47.562703 2738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 17:53:47.563097 kubelet[2738]: I0123 17:53:47.563071 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 17:53:47.564875 kubelet[2738]: E0123 17:53:47.564656 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 17:53:47.603985 kubelet[2738]: I0123 17:53:47.603880 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.604287 kubelet[2738]: I0123 17:53:47.603851 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.606788 kubelet[2738]: I0123 17:53:47.606755 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.668121 kubelet[2738]: I0123 17:53:47.668098 2738 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.682339 kubelet[2738]: I0123 17:53:47.682181 2738 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.682339 kubelet[2738]: I0123 17:53:47.682323 2738 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.686763 kubelet[2738]: I0123 17:53:47.686735 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce9811df4419f7a938354af71d68532b-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" (UID: \"ce9811df4419f7a938354af71d68532b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.686879 kubelet[2738]: I0123 17:53:47.686768 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce9811df4419f7a938354af71d68532b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" (UID: \"ce9811df4419f7a938354af71d68532b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.686879 kubelet[2738]: I0123 17:53:47.686791 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.686879 kubelet[2738]: I0123 17:53:47.686807 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.686879 kubelet[2738]: I0123 17:53:47.686825 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.686879 kubelet[2738]: I0123 17:53:47.686840 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.687004 kubelet[2738]: I0123 17:53:47.686855 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce9811df4419f7a938354af71d68532b-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" (UID: \"ce9811df4419f7a938354af71d68532b\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.687004 kubelet[2738]: I0123 17:53:47.686871 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/528e801571ceda7163e55fc08446afd3-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-1-a204a5ad1b\" (UID: \"528e801571ceda7163e55fc08446afd3\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:47.687004 kubelet[2738]: I0123 17:53:47.686904 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75e8c495c966634cc0641399e6f81de9-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-1-a204a5ad1b\" (UID: \"75e8c495c966634cc0641399e6f81de9\") " pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:48.467018 kubelet[2738]: I0123 17:53:48.466967 2738 apiserver.go:52] "Watching apiserver"
Jan 23 17:53:48.484557 kubelet[2738]: I0123 17:53:48.484515 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 17:53:48.538527 kubelet[2738]: I0123 17:53:48.538076 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:48.538527 kubelet[2738]: I0123 17:53:48.538314 2738 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:48.549894 kubelet[2738]: E0123 17:53:48.549861 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-1-a204a5ad1b\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:48.550335 kubelet[2738]: E0123 17:53:48.550306 2738 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-1-a204a5ad1b\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b"
Jan 23 17:53:48.591538 kubelet[2738]: I0123 17:53:48.591423 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-1-a204a5ad1b" podStartSLOduration=1.5914013439999999 podStartE2EDuration="1.591401344s" podCreationTimestamp="2026-01-23 17:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:53:48.576986302 +0000 UTC m=+1.204662035" watchObservedRunningTime="2026-01-23 17:53:48.591401344 +0000 UTC m=+1.219077037"
Jan 23 17:53:48.608580 kubelet[2738]: I0123 17:53:48.608510 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-1-a204a5ad1b" podStartSLOduration=1.608492716 podStartE2EDuration="1.608492716s" podCreationTimestamp="2026-01-23 17:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:53:48.591803776 +0000 UTC m=+1.219479469" watchObservedRunningTime="2026-01-23 17:53:48.608492716 +0000 UTC m=+1.236168409"
Jan 23 17:53:48.627545 kubelet[2738]: I0123 17:53:48.627484 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" podStartSLOduration=1.6274669990000001 podStartE2EDuration="1.627466999s" podCreationTimestamp="2026-01-23 17:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:53:48.609144402 +0000 UTC m=+1.236820095" watchObservedRunningTime="2026-01-23 17:53:48.627466999 +0000 UTC m=+1.255142692"
Jan 23 17:53:52.711009 kubelet[2738]: I0123 17:53:52.710957 2738 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 17:53:52.711519 containerd[1527]: time="2026-01-23T17:53:52.711273525Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 17:53:52.711858 kubelet[2738]: I0123 17:53:52.711607 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 17:53:53.408107 systemd[1]: Created slice kubepods-besteffort-pod61e3d4de_979f_432b_a297_0a050c8ff08b.slice - libcontainer container kubepods-besteffort-pod61e3d4de_979f_432b_a297_0a050c8ff08b.slice.
Jan 23 17:53:53.428659 kubelet[2738]: I0123 17:53:53.428503 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61e3d4de-979f-432b-a297-0a050c8ff08b-kube-proxy\") pod \"kube-proxy-stxz9\" (UID: \"61e3d4de-979f-432b-a297-0a050c8ff08b\") " pod="kube-system/kube-proxy-stxz9"
Jan 23 17:53:53.428659 kubelet[2738]: I0123 17:53:53.428544 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61e3d4de-979f-432b-a297-0a050c8ff08b-xtables-lock\") pod \"kube-proxy-stxz9\" (UID: \"61e3d4de-979f-432b-a297-0a050c8ff08b\") " pod="kube-system/kube-proxy-stxz9"
Jan 23 17:53:53.428659 kubelet[2738]: I0123 17:53:53.428577 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61e3d4de-979f-432b-a297-0a050c8ff08b-lib-modules\") pod \"kube-proxy-stxz9\" (UID: \"61e3d4de-979f-432b-a297-0a050c8ff08b\") " pod="kube-system/kube-proxy-stxz9"
Jan 23 17:53:53.428659 kubelet[2738]: I0123 17:53:53.428594 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkln\" (UniqueName: \"kubernetes.io/projected/61e3d4de-979f-432b-a297-0a050c8ff08b-kube-api-access-6jkln\") pod \"kube-proxy-stxz9\" (UID: \"61e3d4de-979f-432b-a297-0a050c8ff08b\") " pod="kube-system/kube-proxy-stxz9"
Jan 23 17:53:53.539487 kubelet[2738]: E0123 17:53:53.539420 2738 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 23 17:53:53.539487 kubelet[2738]: E0123 17:53:53.539472 2738 projected.go:194] Error preparing data for projected volume kube-api-access-6jkln for pod kube-system/kube-proxy-stxz9: configmap "kube-root-ca.crt" not found
Jan 23 17:53:53.539665 kubelet[2738]: E0123 17:53:53.539548 2738 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61e3d4de-979f-432b-a297-0a050c8ff08b-kube-api-access-6jkln podName:61e3d4de-979f-432b-a297-0a050c8ff08b nodeName:}" failed. No retries permitted until 2026-01-23 17:53:54.039526353 +0000 UTC m=+6.667202046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6jkln" (UniqueName: "kubernetes.io/projected/61e3d4de-979f-432b-a297-0a050c8ff08b-kube-api-access-6jkln") pod "kube-proxy-stxz9" (UID: "61e3d4de-979f-432b-a297-0a050c8ff08b") : configmap "kube-root-ca.crt" not found
Jan 23 17:53:53.777274 systemd[1]: Created slice kubepods-besteffort-pode7854aab_eee6_44d3_b5e7_d8bd4ca62703.slice - libcontainer container kubepods-besteffort-pode7854aab_eee6_44d3_b5e7_d8bd4ca62703.slice.
Jan 23 17:53:53.832523 kubelet[2738]: I0123 17:53:53.832330 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7854aab-eee6-44d3-b5e7-d8bd4ca62703-var-lib-calico\") pod \"tigera-operator-7dcd859c48-d2nmp\" (UID: \"e7854aab-eee6-44d3-b5e7-d8bd4ca62703\") " pod="tigera-operator/tigera-operator-7dcd859c48-d2nmp"
Jan 23 17:53:53.832523 kubelet[2738]: I0123 17:53:53.832415 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxz2j\" (UniqueName: \"kubernetes.io/projected/e7854aab-eee6-44d3-b5e7-d8bd4ca62703-kube-api-access-gxz2j\") pod \"tigera-operator-7dcd859c48-d2nmp\" (UID: \"e7854aab-eee6-44d3-b5e7-d8bd4ca62703\") " pod="tigera-operator/tigera-operator-7dcd859c48-d2nmp"
Jan 23 17:53:54.083740 containerd[1527]: time="2026-01-23T17:53:54.083620579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-d2nmp,Uid:e7854aab-eee6-44d3-b5e7-d8bd4ca62703,Namespace:tigera-operator,Attempt:0,}"
Jan 23 17:53:54.108171 containerd[1527]: time="2026-01-23T17:53:54.108122714Z" level=info msg="connecting to shim 21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e" address="unix:///run/containerd/s/51bd3dac0fa830fa513539dd3f70f61e59dc7b79d41de59498d0fba2335ff905" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:53:54.133729 systemd[1]: Started cri-containerd-21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e.scope - libcontainer container 21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e.
Jan 23 17:53:54.183538 containerd[1527]: time="2026-01-23T17:53:54.183491306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-d2nmp,Uid:e7854aab-eee6-44d3-b5e7-d8bd4ca62703,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e\""
Jan 23 17:53:54.187718 containerd[1527]: time="2026-01-23T17:53:54.187667286Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 17:53:54.321135 containerd[1527]: time="2026-01-23T17:53:54.319276164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stxz9,Uid:61e3d4de-979f-432b-a297-0a050c8ff08b,Namespace:kube-system,Attempt:0,}"
Jan 23 17:53:54.351018 containerd[1527]: time="2026-01-23T17:53:54.350908566Z" level=info msg="connecting to shim 513f4d0ef64bede7733b9051fe2a9034ccde5cb3d5d7b650b9dff749862e519d" address="unix:///run/containerd/s/0b986e1df9f217c772033d32b282f84eb38a3c6bcc51fc1cbc8cc6d0b94313db" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:53:54.382667 systemd[1]: Started cri-containerd-513f4d0ef64bede7733b9051fe2a9034ccde5cb3d5d7b650b9dff749862e519d.scope - libcontainer container 513f4d0ef64bede7733b9051fe2a9034ccde5cb3d5d7b650b9dff749862e519d.
Jan 23 17:53:54.421306 containerd[1527]: time="2026-01-23T17:53:54.421250985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stxz9,Uid:61e3d4de-979f-432b-a297-0a050c8ff08b,Namespace:kube-system,Attempt:0,} returns sandbox id \"513f4d0ef64bede7733b9051fe2a9034ccde5cb3d5d7b650b9dff749862e519d\""
Jan 23 17:53:54.426538 containerd[1527]: time="2026-01-23T17:53:54.426255632Z" level=info msg="CreateContainer within sandbox \"513f4d0ef64bede7733b9051fe2a9034ccde5cb3d5d7b650b9dff749862e519d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 17:53:54.437567 containerd[1527]: time="2026-01-23T17:53:54.437521310Z" level=info msg="Container ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:53:54.445857 containerd[1527]: time="2026-01-23T17:53:54.445815736Z" level=info msg="CreateContainer within sandbox \"513f4d0ef64bede7733b9051fe2a9034ccde5cb3d5d7b650b9dff749862e519d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4\""
Jan 23 17:53:54.447204 containerd[1527]: time="2026-01-23T17:53:54.446921038Z" level=info msg="StartContainer for \"ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4\""
Jan 23 17:53:54.448786 containerd[1527]: time="2026-01-23T17:53:54.448751178Z" level=info msg="connecting to shim ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4" address="unix:///run/containerd/s/0b986e1df9f217c772033d32b282f84eb38a3c6bcc51fc1cbc8cc6d0b94313db" protocol=ttrpc version=3
Jan 23 17:53:54.469692 systemd[1]: Started cri-containerd-ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4.scope - libcontainer container ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4.
Jan 23 17:53:54.553240 containerd[1527]: time="2026-01-23T17:53:54.553075601Z" level=info msg="StartContainer for \"ec49f568ab506e7fcbe8f9671e68373a8d57f9e84bd2b752e42e6d3d8c58d8d4\" returns successfully"
Jan 23 17:53:55.569861 kubelet[2738]: I0123 17:53:55.569726 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stxz9" podStartSLOduration=2.569704951 podStartE2EDuration="2.569704951s" podCreationTimestamp="2026-01-23 17:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:53:55.56966386 +0000 UTC m=+8.197339593" watchObservedRunningTime="2026-01-23 17:53:55.569704951 +0000 UTC m=+8.197380644"
Jan 23 17:53:56.101410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2345790843.mount: Deactivated successfully.
Jan 23 17:53:56.505790 containerd[1527]: time="2026-01-23T17:53:56.505480763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:53:56.507473 containerd[1527]: time="2026-01-23T17:53:56.507338181Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 23 17:53:56.508555 containerd[1527]: time="2026-01-23T17:53:56.508485664Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:53:56.512354 containerd[1527]: time="2026-01-23T17:53:56.511660567Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:53:56.513131 containerd[1527]: time="2026-01-23T17:53:56.513101602Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.325374899s"
Jan 23 17:53:56.513282 containerd[1527]: time="2026-01-23T17:53:56.513263162Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 23 17:53:56.518264 containerd[1527]: time="2026-01-23T17:53:56.518117959Z" level=info msg="CreateContainer within sandbox \"21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 17:53:56.527649 containerd[1527]: time="2026-01-23T17:53:56.527604337Z" level=info msg="Container f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:53:56.529374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1225280828.mount: Deactivated successfully.
Jan 23 17:53:56.541336 containerd[1527]: time="2026-01-23T17:53:56.541292511Z" level=info msg="CreateContainer within sandbox \"21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080\""
Jan 23 17:53:56.542425 containerd[1527]: time="2026-01-23T17:53:56.542389982Z" level=info msg="StartContainer for \"f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080\""
Jan 23 17:53:56.543749 containerd[1527]: time="2026-01-23T17:53:56.543713588Z" level=info msg="connecting to shim f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080" address="unix:///run/containerd/s/51bd3dac0fa830fa513539dd3f70f61e59dc7b79d41de59498d0fba2335ff905" protocol=ttrpc version=3
Jan 23 17:53:56.570646 systemd[1]: Started cri-containerd-f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080.scope - libcontainer container f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080.
Jan 23 17:53:56.606958 containerd[1527]: time="2026-01-23T17:53:56.606909686Z" level=info msg="StartContainer for \"f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080\" returns successfully"
Jan 23 17:53:58.662651 kubelet[2738]: I0123 17:53:58.662174 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-d2nmp" podStartSLOduration=3.333035252 podStartE2EDuration="5.662154764s" podCreationTimestamp="2026-01-23 17:53:53 +0000 UTC" firstStartedPulling="2026-01-23 17:53:54.185884439 +0000 UTC m=+6.813560132" lastFinishedPulling="2026-01-23 17:53:56.515003951 +0000 UTC m=+9.142679644" observedRunningTime="2026-01-23 17:53:57.588042312 +0000 UTC m=+10.215718005" watchObservedRunningTime="2026-01-23 17:53:58.662154764 +0000 UTC m=+11.289830457"
Jan 23 17:54:02.899561 sudo[1810]: pam_unix(sudo:session): session closed for user root
Jan 23 17:54:02.997686 sshd[1809]: Connection closed by 68.220.241.50 port 33294
Jan 23 17:54:02.998889 sshd-session[1806]: pam_unix(sshd:session): session closed for user core
Jan 23 17:54:03.008471 systemd-logind[1511]: Session 7 logged out. Waiting for processes to exit.
Jan 23 17:54:03.010567 systemd[1]: sshd@6-49.13.3.65:22-68.220.241.50:33294.service: Deactivated successfully.
Jan 23 17:54:03.016979 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 17:54:03.017464 systemd[1]: session-7.scope: Consumed 6.586s CPU time, 221.1M memory peak.
Jan 23 17:54:03.020697 systemd-logind[1511]: Removed session 7.
Jan 23 17:54:16.123164 systemd[1]: Created slice kubepods-besteffort-podeb2a14ac_bc0b_46ff_8ea4_f22fab7a6af1.slice - libcontainer container kubepods-besteffort-podeb2a14ac_bc0b_46ff_8ea4_f22fab7a6af1.slice.
Jan 23 17:54:16.176153 kubelet[2738]: I0123 17:54:16.176105 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1-typha-certs\") pod \"calico-typha-9f4bcb944-9fqzk\" (UID: \"eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1\") " pod="calico-system/calico-typha-9f4bcb944-9fqzk"
Jan 23 17:54:16.178198 kubelet[2738]: I0123 17:54:16.177885 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1-tigera-ca-bundle\") pod \"calico-typha-9f4bcb944-9fqzk\" (UID: \"eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1\") " pod="calico-system/calico-typha-9f4bcb944-9fqzk"
Jan 23 17:54:16.178198 kubelet[2738]: I0123 17:54:16.178134 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4xq\" (UniqueName: \"kubernetes.io/projected/eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1-kube-api-access-vr4xq\") pod \"calico-typha-9f4bcb944-9fqzk\" (UID: \"eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1\") " pod="calico-system/calico-typha-9f4bcb944-9fqzk"
Jan 23 17:54:16.356481 systemd[1]: Created slice kubepods-besteffort-pode44de8d1_a7b6_49f6_af91_f4f101ed8135.slice - libcontainer container kubepods-besteffort-pode44de8d1_a7b6_49f6_af91_f4f101ed8135.slice.
Jan 23 17:54:16.380822 kubelet[2738]: I0123 17:54:16.380267 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-var-run-calico\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.383411 kubelet[2738]: I0123 17:54:16.383345 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e44de8d1-a7b6-49f6-af91-f4f101ed8135-node-certs\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.383411 kubelet[2738]: I0123 17:54:16.383416 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e44de8d1-a7b6-49f6-af91-f4f101ed8135-tigera-ca-bundle\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.386690 kubelet[2738]: I0123 17:54:16.386642 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-var-lib-calico\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.388812 kubelet[2738]: I0123 17:54:16.388761 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-xtables-lock\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.390018 kubelet[2738]: I0123 17:54:16.389229 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-cni-log-dir\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.390182 kubelet[2738]: I0123 17:54:16.390084 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-flexvol-driver-host\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.390182 kubelet[2738]: I0123 17:54:16.390174 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4t75\" (UniqueName: \"kubernetes.io/projected/e44de8d1-a7b6-49f6-af91-f4f101ed8135-kube-api-access-s4t75\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.390264 kubelet[2738]: I0123 17:54:16.390218 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-cni-bin-dir\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.390264 kubelet[2738]: I0123 17:54:16.390245 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-lib-modules\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp"
Jan 23 17:54:16.390325 kubelet[2738]: I0123 17:54:16.390283 2738 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-policysync\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp" Jan 23 17:54:16.390325 kubelet[2738]: I0123 17:54:16.390315 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e44de8d1-a7b6-49f6-af91-f4f101ed8135-cni-net-dir\") pod \"calico-node-kzqjp\" (UID: \"e44de8d1-a7b6-49f6-af91-f4f101ed8135\") " pod="calico-system/calico-node-kzqjp" Jan 23 17:54:16.427949 containerd[1527]: time="2026-01-23T17:54:16.427846215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9f4bcb944-9fqzk,Uid:eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1,Namespace:calico-system,Attempt:0,}" Jan 23 17:54:16.461720 containerd[1527]: time="2026-01-23T17:54:16.461541384Z" level=info msg="connecting to shim ffd5d8f341cc75fd645daf038f17a646c700c0bff5b476a2d7bbbe40b6a25f6d" address="unix:///run/containerd/s/5c44d35034e1fc4354d5406213ea7eb7ce2dcd571503a5a57894739d25bbce82" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:16.495576 kubelet[2738]: E0123 17:54:16.495527 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.495576 kubelet[2738]: W0123 17:54:16.495564 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.495749 kubelet[2738]: E0123 17:54:16.495593 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.497511 kubelet[2738]: E0123 17:54:16.495829 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.497511 kubelet[2738]: W0123 17:54:16.495841 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.497511 kubelet[2738]: E0123 17:54:16.495850 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.499156 kubelet[2738]: E0123 17:54:16.499121 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.499156 kubelet[2738]: W0123 17:54:16.499147 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.499156 kubelet[2738]: E0123 17:54:16.499174 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.504627 kubelet[2738]: E0123 17:54:16.504586 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.504627 kubelet[2738]: W0123 17:54:16.504618 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.504790 kubelet[2738]: E0123 17:54:16.504641 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.525080 kubelet[2738]: E0123 17:54:16.524972 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.525080 kubelet[2738]: W0123 17:54:16.524998 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.525080 kubelet[2738]: E0123 17:54:16.525033 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.527573 systemd[1]: Started cri-containerd-ffd5d8f341cc75fd645daf038f17a646c700c0bff5b476a2d7bbbe40b6a25f6d.scope - libcontainer container ffd5d8f341cc75fd645daf038f17a646c700c0bff5b476a2d7bbbe40b6a25f6d. 
Jan 23 17:54:16.600422 containerd[1527]: time="2026-01-23T17:54:16.599170645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9f4bcb944-9fqzk,Uid:eb2a14ac-bc0b-46ff-8ea4-f22fab7a6af1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffd5d8f341cc75fd645daf038f17a646c700c0bff5b476a2d7bbbe40b6a25f6d\"" Jan 23 17:54:16.604475 containerd[1527]: time="2026-01-23T17:54:16.604225660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:54:16.664114 containerd[1527]: time="2026-01-23T17:54:16.663363165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kzqjp,Uid:e44de8d1-a7b6-49f6-af91-f4f101ed8135,Namespace:calico-system,Attempt:0,}" Jan 23 17:54:16.696761 kubelet[2738]: E0123 17:54:16.696447 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:16.707827 containerd[1527]: time="2026-01-23T17:54:16.707778191Z" level=info msg="connecting to shim 75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522" address="unix:///run/containerd/s/8a0877571393751aabff664471f578fa2cc0777dd358f417583d0635a22c4ae9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:16.757201 systemd[1]: Started cri-containerd-75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522.scope - libcontainer container 75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522. 
Jan 23 17:54:16.759112 kubelet[2738]: E0123 17:54:16.759053 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.759112 kubelet[2738]: W0123 17:54:16.759083 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.760590 kubelet[2738]: E0123 17:54:16.760230 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.761444 kubelet[2738]: E0123 17:54:16.761407 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.762250 kubelet[2738]: W0123 17:54:16.762088 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.762250 kubelet[2738]: E0123 17:54:16.762201 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.762739 kubelet[2738]: E0123 17:54:16.762663 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.762977 kubelet[2738]: W0123 17:54:16.762864 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.763545 kubelet[2738]: E0123 17:54:16.763519 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.763973 kubelet[2738]: E0123 17:54:16.763957 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.764539 kubelet[2738]: W0123 17:54:16.764463 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.764715 kubelet[2738]: E0123 17:54:16.764619 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.765119 kubelet[2738]: E0123 17:54:16.765047 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.765491 kubelet[2738]: W0123 17:54:16.765228 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.765491 kubelet[2738]: E0123 17:54:16.765254 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.767404 kubelet[2738]: E0123 17:54:16.767155 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.767404 kubelet[2738]: W0123 17:54:16.767170 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.767404 kubelet[2738]: E0123 17:54:16.767186 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.769414 kubelet[2738]: E0123 17:54:16.769356 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.769851 kubelet[2738]: W0123 17:54:16.769793 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.770302 kubelet[2738]: E0123 17:54:16.769954 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.771858 kubelet[2738]: E0123 17:54:16.771833 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.772444 kubelet[2738]: W0123 17:54:16.772160 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.772444 kubelet[2738]: E0123 17:54:16.772189 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.773733 kubelet[2738]: E0123 17:54:16.773660 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.773733 kubelet[2738]: W0123 17:54:16.773681 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.773733 kubelet[2738]: E0123 17:54:16.773701 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.774785 kubelet[2738]: E0123 17:54:16.774721 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.774785 kubelet[2738]: W0123 17:54:16.774737 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.774785 kubelet[2738]: E0123 17:54:16.774754 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.775896 kubelet[2738]: E0123 17:54:16.775585 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.775896 kubelet[2738]: W0123 17:54:16.775705 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.775896 kubelet[2738]: E0123 17:54:16.775727 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.777104 kubelet[2738]: E0123 17:54:16.777058 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.778576 kubelet[2738]: W0123 17:54:16.778490 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.778576 kubelet[2738]: E0123 17:54:16.778522 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.779281 kubelet[2738]: E0123 17:54:16.779211 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.779281 kubelet[2738]: W0123 17:54:16.779228 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.779281 kubelet[2738]: E0123 17:54:16.779242 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.779693 kubelet[2738]: E0123 17:54:16.779624 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.779693 kubelet[2738]: W0123 17:54:16.779637 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.779693 kubelet[2738]: E0123 17:54:16.779649 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.780092 kubelet[2738]: E0123 17:54:16.780022 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.780092 kubelet[2738]: W0123 17:54:16.780036 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.780092 kubelet[2738]: E0123 17:54:16.780048 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.780931 kubelet[2738]: E0123 17:54:16.780471 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.780931 kubelet[2738]: W0123 17:54:16.780561 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.780931 kubelet[2738]: E0123 17:54:16.780574 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.781904 kubelet[2738]: E0123 17:54:16.781738 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.781904 kubelet[2738]: W0123 17:54:16.781776 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.782810 kubelet[2738]: E0123 17:54:16.782465 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.783091 kubelet[2738]: E0123 17:54:16.782994 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.783189 kubelet[2738]: W0123 17:54:16.783173 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.784058 kubelet[2738]: E0123 17:54:16.783510 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.786656 kubelet[2738]: E0123 17:54:16.786633 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.786891 kubelet[2738]: W0123 17:54:16.786735 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.786891 kubelet[2738]: E0123 17:54:16.786758 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.787212 kubelet[2738]: E0123 17:54:16.787076 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.787212 kubelet[2738]: W0123 17:54:16.787092 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.787212 kubelet[2738]: E0123 17:54:16.787109 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.793539 kubelet[2738]: E0123 17:54:16.793500 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.793539 kubelet[2738]: W0123 17:54:16.793524 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.793539 kubelet[2738]: E0123 17:54:16.793548 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.794123 kubelet[2738]: I0123 17:54:16.794075 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e403bca-286c-4acf-bbf0-2ee7f3d0b56e-registration-dir\") pod \"csi-node-driver-jkkz7\" (UID: \"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e\") " pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:16.794449 kubelet[2738]: E0123 17:54:16.794405 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.794449 kubelet[2738]: W0123 17:54:16.794424 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.794449 kubelet[2738]: E0123 17:54:16.794457 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.794655 kubelet[2738]: E0123 17:54:16.794619 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.794655 kubelet[2738]: W0123 17:54:16.794628 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.795392 kubelet[2738]: E0123 17:54:16.795272 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.795857 kubelet[2738]: E0123 17:54:16.795816 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.795857 kubelet[2738]: W0123 17:54:16.795835 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.796045 kubelet[2738]: E0123 17:54:16.795956 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.796118 kubelet[2738]: I0123 17:54:16.796001 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6e403bca-286c-4acf-bbf0-2ee7f3d0b56e-kubelet-dir\") pod \"csi-node-driver-jkkz7\" (UID: \"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e\") " pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:16.796474 kubelet[2738]: E0123 17:54:16.796425 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.796474 kubelet[2738]: W0123 17:54:16.796471 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.796667 kubelet[2738]: E0123 17:54:16.796494 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.796667 kubelet[2738]: E0123 17:54:16.796686 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.796822 kubelet[2738]: W0123 17:54:16.796694 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.796822 kubelet[2738]: E0123 17:54:16.796711 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.797230 kubelet[2738]: E0123 17:54:16.797127 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.797230 kubelet[2738]: W0123 17:54:16.797164 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.797230 kubelet[2738]: E0123 17:54:16.797180 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.797522 kubelet[2738]: I0123 17:54:16.797333 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6e403bca-286c-4acf-bbf0-2ee7f3d0b56e-varrun\") pod \"csi-node-driver-jkkz7\" (UID: \"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e\") " pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:16.797522 kubelet[2738]: E0123 17:54:16.797504 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.797522 kubelet[2738]: W0123 17:54:16.797518 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.797522 kubelet[2738]: E0123 17:54:16.797538 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.797970 kubelet[2738]: E0123 17:54:16.797869 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.797970 kubelet[2738]: W0123 17:54:16.797907 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.797970 kubelet[2738]: E0123 17:54:16.797924 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.799037 kubelet[2738]: E0123 17:54:16.799004 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.799037 kubelet[2738]: W0123 17:54:16.799024 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.799037 kubelet[2738]: E0123 17:54:16.799041 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.799644 kubelet[2738]: I0123 17:54:16.799074 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e403bca-286c-4acf-bbf0-2ee7f3d0b56e-socket-dir\") pod \"csi-node-driver-jkkz7\" (UID: \"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e\") " pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:16.799644 kubelet[2738]: E0123 17:54:16.799234 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.799644 kubelet[2738]: W0123 17:54:16.799243 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.799644 kubelet[2738]: E0123 17:54:16.799260 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.799644 kubelet[2738]: I0123 17:54:16.799275 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7872c\" (UniqueName: \"kubernetes.io/projected/6e403bca-286c-4acf-bbf0-2ee7f3d0b56e-kube-api-access-7872c\") pod \"csi-node-driver-jkkz7\" (UID: \"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e\") " pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:16.800647 kubelet[2738]: E0123 17:54:16.800610 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.800647 kubelet[2738]: W0123 17:54:16.800634 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.800647 kubelet[2738]: E0123 17:54:16.800659 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.801016 kubelet[2738]: E0123 17:54:16.800815 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.801016 kubelet[2738]: W0123 17:54:16.800822 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.801016 kubelet[2738]: E0123 17:54:16.800831 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.801016 kubelet[2738]: E0123 17:54:16.800967 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.801016 kubelet[2738]: W0123 17:54:16.800974 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.801016 kubelet[2738]: E0123 17:54:16.800982 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.801780 kubelet[2738]: E0123 17:54:16.801084 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.801780 kubelet[2738]: W0123 17:54:16.801091 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.801780 kubelet[2738]: E0123 17:54:16.801107 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.848208 containerd[1527]: time="2026-01-23T17:54:16.848137060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kzqjp,Uid:e44de8d1-a7b6-49f6-af91-f4f101ed8135,Namespace:calico-system,Attempt:0,} returns sandbox id \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\"" Jan 23 17:54:16.900499 kubelet[2738]: E0123 17:54:16.900412 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.900499 kubelet[2738]: W0123 17:54:16.900467 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.900499 kubelet[2738]: E0123 17:54:16.900498 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.901134 kubelet[2738]: E0123 17:54:16.900845 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.901134 kubelet[2738]: W0123 17:54:16.900861 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.901134 kubelet[2738]: E0123 17:54:16.900889 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.901836 kubelet[2738]: E0123 17:54:16.901775 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.901836 kubelet[2738]: W0123 17:54:16.901806 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.902120 kubelet[2738]: E0123 17:54:16.902056 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.902356 kubelet[2738]: E0123 17:54:16.902324 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.902356 kubelet[2738]: W0123 17:54:16.902351 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.902679 kubelet[2738]: E0123 17:54:16.902385 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.902934 kubelet[2738]: E0123 17:54:16.902893 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.902934 kubelet[2738]: W0123 17:54:16.902908 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.903238 kubelet[2738]: E0123 17:54:16.903082 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.903238 kubelet[2738]: W0123 17:54:16.903099 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.903238 kubelet[2738]: E0123 17:54:16.903092 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.903627 kubelet[2738]: E0123 17:54:16.903215 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.903936 kubelet[2738]: E0123 17:54:16.903902 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.903936 kubelet[2738]: W0123 17:54:16.903932 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.904134 kubelet[2738]: E0123 17:54:16.903994 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.904414 kubelet[2738]: E0123 17:54:16.904332 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.904414 kubelet[2738]: W0123 17:54:16.904363 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.904881 kubelet[2738]: E0123 17:54:16.904541 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.905194 kubelet[2738]: E0123 17:54:16.904924 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.905194 kubelet[2738]: W0123 17:54:16.904941 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.905194 kubelet[2738]: E0123 17:54:16.905065 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.905694 kubelet[2738]: E0123 17:54:16.905250 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.905694 kubelet[2738]: W0123 17:54:16.905266 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.906116 kubelet[2738]: E0123 17:54:16.905776 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.906116 kubelet[2738]: E0123 17:54:16.906053 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.906116 kubelet[2738]: W0123 17:54:16.906076 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.906456 kubelet[2738]: E0123 17:54:16.906222 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.906835 kubelet[2738]: E0123 17:54:16.906478 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.906835 kubelet[2738]: W0123 17:54:16.906591 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.906835 kubelet[2738]: E0123 17:54:16.906734 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.907563 kubelet[2738]: E0123 17:54:16.907415 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.907563 kubelet[2738]: W0123 17:54:16.907459 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.907891 kubelet[2738]: E0123 17:54:16.907865 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.908325 kubelet[2738]: E0123 17:54:16.908295 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.908636 kubelet[2738]: W0123 17:54:16.908508 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.908636 kubelet[2738]: E0123 17:54:16.908599 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.909169 kubelet[2738]: E0123 17:54:16.909093 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.909169 kubelet[2738]: W0123 17:54:16.909111 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.909271 kubelet[2738]: E0123 17:54:16.909169 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.909628 kubelet[2738]: E0123 17:54:16.909522 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.909628 kubelet[2738]: W0123 17:54:16.909537 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.909628 kubelet[2738]: E0123 17:54:16.909569 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.910048 kubelet[2738]: E0123 17:54:16.909987 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.910048 kubelet[2738]: W0123 17:54:16.910003 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.910048 kubelet[2738]: E0123 17:54:16.910038 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.910352 kubelet[2738]: E0123 17:54:16.910281 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.910352 kubelet[2738]: W0123 17:54:16.910294 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.910452 kubelet[2738]: E0123 17:54:16.910372 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.910723 kubelet[2738]: E0123 17:54:16.910666 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.910723 kubelet[2738]: W0123 17:54:16.910680 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.910723 kubelet[2738]: E0123 17:54:16.910712 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.911115 kubelet[2738]: E0123 17:54:16.911035 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.911115 kubelet[2738]: W0123 17:54:16.911049 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.911115 kubelet[2738]: E0123 17:54:16.911073 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.911394 kubelet[2738]: E0123 17:54:16.911380 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.911583 kubelet[2738]: W0123 17:54:16.911447 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.911583 kubelet[2738]: E0123 17:54:16.911477 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.913039 kubelet[2738]: E0123 17:54:16.913013 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.913159 kubelet[2738]: W0123 17:54:16.913144 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.913380 kubelet[2738]: E0123 17:54:16.913336 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.913610 kubelet[2738]: E0123 17:54:16.913519 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.913610 kubelet[2738]: W0123 17:54:16.913532 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.913610 kubelet[2738]: E0123 17:54:16.913568 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.913937 kubelet[2738]: E0123 17:54:16.913922 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.914543 kubelet[2738]: W0123 17:54:16.914411 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.915801 kubelet[2738]: E0123 17:54:16.914668 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:16.916543 kubelet[2738]: E0123 17:54:16.916523 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.916637 kubelet[2738]: W0123 17:54:16.916621 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.916693 kubelet[2738]: E0123 17:54:16.916683 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:16.926933 kubelet[2738]: E0123 17:54:16.926904 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:16.927110 kubelet[2738]: W0123 17:54:16.927042 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:16.927110 kubelet[2738]: E0123 17:54:16.927072 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:18.252684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90407648.mount: Deactivated successfully. 
Jan 23 17:54:18.503751 kubelet[2738]: E0123 17:54:18.503510 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:19.100553 containerd[1527]: time="2026-01-23T17:54:19.100480038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:19.102970 containerd[1527]: time="2026-01-23T17:54:19.102914153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 23 17:54:19.104468 containerd[1527]: time="2026-01-23T17:54:19.104389175Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:19.108590 containerd[1527]: time="2026-01-23T17:54:19.108528415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:19.109204 containerd[1527]: time="2026-01-23T17:54:19.109173357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.504826124s" Jan 23 17:54:19.109292 containerd[1527]: time="2026-01-23T17:54:19.109277167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 17:54:19.111398 containerd[1527]: time="2026-01-23T17:54:19.111357728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 17:54:19.130415 containerd[1527]: time="2026-01-23T17:54:19.130377445Z" level=info msg="CreateContainer within sandbox \"ffd5d8f341cc75fd645daf038f17a646c700c0bff5b476a2d7bbbe40b6a25f6d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 17:54:19.140831 containerd[1527]: time="2026-01-23T17:54:19.140776929Z" level=info msg="Container 0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:54:19.146382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount571533182.mount: Deactivated successfully. Jan 23 17:54:19.153583 containerd[1527]: time="2026-01-23T17:54:19.153424190Z" level=info msg="CreateContainer within sandbox \"ffd5d8f341cc75fd645daf038f17a646c700c0bff5b476a2d7bbbe40b6a25f6d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12\"" Jan 23 17:54:19.155155 containerd[1527]: time="2026-01-23T17:54:19.154373001Z" level=info msg="StartContainer for \"0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12\"" Jan 23 17:54:19.157479 containerd[1527]: time="2026-01-23T17:54:19.157379412Z" level=info msg="connecting to shim 0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12" address="unix:///run/containerd/s/5c44d35034e1fc4354d5406213ea7eb7ce2dcd571503a5a57894739d25bbce82" protocol=ttrpc version=3 Jan 23 17:54:19.192790 systemd[1]: Started cri-containerd-0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12.scope - libcontainer container 0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12. 
Jan 23 17:54:19.243181 containerd[1527]: time="2026-01-23T17:54:19.243115089Z" level=info msg="StartContainer for \"0723e54fba09a05528cdf871f97bf2addaa2e55f708439120bbf55dc74f05d12\" returns successfully" Jan 23 17:54:19.702952 kubelet[2738]: E0123 17:54:19.702898 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:19.702952 kubelet[2738]: W0123 17:54:19.702936 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.702970 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.703203 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:19.704754 kubelet[2738]: W0123 17:54:19.703216 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.703231 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.703462 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:19.704754 kubelet[2738]: W0123 17:54:19.703476 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.703502 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.703814 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:19.704754 kubelet[2738]: W0123 17:54:19.703827 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:19.704754 kubelet[2738]: E0123 17:54:19.703844 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.704166 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:19.705468 kubelet[2738]: W0123 17:54:19.704181 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.704196 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.704422 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:19.705468 kubelet[2738]: W0123 17:54:19.704464 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.704480 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.704785 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.705468 kubelet[2738]: W0123 17:54:19.704801 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.704818 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.705468 kubelet[2738]: E0123 17:54:19.705056 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.705708 kubelet[2738]: W0123 17:54:19.705069 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.705708 kubelet[2738]: E0123 17:54:19.705084 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.705708 kubelet[2738]: E0123 17:54:19.705316 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.705708 kubelet[2738]: W0123 17:54:19.705328 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.705708 kubelet[2738]: E0123 17:54:19.705343 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.705708 kubelet[2738]: E0123 17:54:19.705642 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.705708 kubelet[2738]: W0123 17:54:19.705659 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.705708 kubelet[2738]: E0123 17:54:19.705676 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.706602 kubelet[2738]: E0123 17:54:19.706175 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.706602 kubelet[2738]: W0123 17:54:19.706191 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.706602 kubelet[2738]: E0123 17:54:19.706202 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.706602 kubelet[2738]: E0123 17:54:19.706321 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.706602 kubelet[2738]: W0123 17:54:19.706328 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.706602 kubelet[2738]: E0123 17:54:19.706335 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.706602 kubelet[2738]: E0123 17:54:19.706458 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.706602 kubelet[2738]: W0123 17:54:19.706465 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.706602 kubelet[2738]: E0123 17:54:19.706473 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.706898 kubelet[2738]: E0123 17:54:19.706623 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.706898 kubelet[2738]: W0123 17:54:19.706645 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.706898 kubelet[2738]: E0123 17:54:19.706655 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.706898 kubelet[2738]: E0123 17:54:19.706775 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.706898 kubelet[2738]: W0123 17:54:19.706781 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.706898 kubelet[2738]: E0123 17:54:19.706788 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.726633 kubelet[2738]: E0123 17:54:19.726548 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.726633 kubelet[2738]: W0123 17:54:19.726600 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.726633 kubelet[2738]: E0123 17:54:19.726627 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.726957 kubelet[2738]: E0123 17:54:19.726869 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.726957 kubelet[2738]: W0123 17:54:19.726882 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.726957 kubelet[2738]: E0123 17:54:19.726896 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.727330 kubelet[2738]: E0123 17:54:19.727111 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.727330 kubelet[2738]: W0123 17:54:19.727133 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.727330 kubelet[2738]: E0123 17:54:19.727148 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.728345 kubelet[2738]: E0123 17:54:19.728276 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.728580 kubelet[2738]: W0123 17:54:19.728309 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.728580 kubelet[2738]: E0123 17:54:19.728489 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.729345 kubelet[2738]: E0123 17:54:19.729279 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.729345 kubelet[2738]: W0123 17:54:19.729297 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.729345 kubelet[2738]: E0123 17:54:19.729330 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.729994 kubelet[2738]: E0123 17:54:19.729931 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.730606 kubelet[2738]: W0123 17:54:19.730515 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.730937 kubelet[2738]: E0123 17:54:19.730844 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.731309 kubelet[2738]: E0123 17:54:19.731271 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.731309 kubelet[2738]: W0123 17:54:19.731289 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.731644 kubelet[2738]: E0123 17:54:19.731485 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.732037 kubelet[2738]: E0123 17:54:19.731927 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.732299 kubelet[2738]: W0123 17:54:19.732133 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.732299 kubelet[2738]: E0123 17:54:19.732171 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.732852 kubelet[2738]: E0123 17:54:19.732822 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.733087 kubelet[2738]: W0123 17:54:19.732924 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.733087 kubelet[2738]: E0123 17:54:19.732959 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.734032 kubelet[2738]: E0123 17:54:19.733637 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.734032 kubelet[2738]: W0123 17:54:19.733655 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.734032 kubelet[2738]: E0123 17:54:19.733683 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.734487 kubelet[2738]: E0123 17:54:19.734461 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.734683 kubelet[2738]: W0123 17:54:19.734556 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.734994 kubelet[2738]: E0123 17:54:19.734908 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.735358 kubelet[2738]: E0123 17:54:19.735340 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.736127 kubelet[2738]: W0123 17:54:19.736082 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.737266 kubelet[2738]: E0123 17:54:19.737245 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.737362 kubelet[2738]: W0123 17:54:19.737336 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.737458 kubelet[2738]: E0123 17:54:19.737424 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.737925 kubelet[2738]: E0123 17:54:19.737911 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.737987 kubelet[2738]: W0123 17:54:19.737975 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.738056 kubelet[2738]: E0123 17:54:19.738045 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.738802 kubelet[2738]: E0123 17:54:19.738776 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.738884 kubelet[2738]: W0123 17:54:19.738871 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.738955 kubelet[2738]: E0123 17:54:19.738943 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.740178 kubelet[2738]: E0123 17:54:19.740157 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.740277 kubelet[2738]: W0123 17:54:19.740262 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.740356 kubelet[2738]: E0123 17:54:19.740343 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.740418 kubelet[2738]: E0123 17:54:19.737991 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.741053 kubelet[2738]: E0123 17:54:19.740739 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.741053 kubelet[2738]: W0123 17:54:19.740754 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.741053 kubelet[2738]: E0123 17:54:19.740767 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:19.742160 kubelet[2738]: E0123 17:54:19.742137 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:19.742244 kubelet[2738]: W0123 17:54:19.742221 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:19.742305 kubelet[2738]: E0123 17:54:19.742290 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.503316 kubelet[2738]: E0123 17:54:20.503245 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e"
Jan 23 17:54:20.641753 kubelet[2738]: I0123 17:54:20.641414 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 17:54:20.713987 kubelet[2738]: E0123 17:54:20.713922 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.713987 kubelet[2738]: W0123 17:54:20.713960 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.714388 kubelet[2738]: E0123 17:54:20.714003 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.714388 kubelet[2738]: E0123 17:54:20.714255 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.714388 kubelet[2738]: W0123 17:54:20.714266 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.714388 kubelet[2738]: E0123 17:54:20.714370 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.714945 kubelet[2738]: E0123 17:54:20.714875 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.714945 kubelet[2738]: W0123 17:54:20.714893 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.714945 kubelet[2738]: E0123 17:54:20.714912 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.715328 kubelet[2738]: E0123 17:54:20.715300 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.715328 kubelet[2738]: W0123 17:54:20.715322 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.715621 kubelet[2738]: E0123 17:54:20.715340 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.715772 kubelet[2738]: E0123 17:54:20.715746 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.715833 kubelet[2738]: W0123 17:54:20.715769 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.715833 kubelet[2738]: E0123 17:54:20.715787 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.716006 kubelet[2738]: E0123 17:54:20.715989 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.716046 kubelet[2738]: W0123 17:54:20.716007 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.716046 kubelet[2738]: E0123 17:54:20.716023 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.716528 kubelet[2738]: E0123 17:54:20.716494 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.716590 kubelet[2738]: W0123 17:54:20.716517 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.716590 kubelet[2738]: E0123 17:54:20.716555 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.718017 kubelet[2738]: E0123 17:54:20.716889 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.718017 kubelet[2738]: W0123 17:54:20.716912 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.718017 kubelet[2738]: E0123 17:54:20.716927 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.718017 kubelet[2738]: E0123 17:54:20.717348 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.718017 kubelet[2738]: W0123 17:54:20.717366 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.718017 kubelet[2738]: E0123 17:54:20.717382 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.718017 kubelet[2738]: E0123 17:54:20.717777 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.718017 kubelet[2738]: W0123 17:54:20.717825 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.718017 kubelet[2738]: E0123 17:54:20.717844 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.718721 kubelet[2738]: E0123 17:54:20.718698 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.718791 kubelet[2738]: W0123 17:54:20.718723 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.718791 kubelet[2738]: E0123 17:54:20.718742 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.719119 kubelet[2738]: E0123 17:54:20.719097 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.719178 kubelet[2738]: W0123 17:54:20.719121 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.719178 kubelet[2738]: E0123 17:54:20.719139 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.719742 kubelet[2738]: E0123 17:54:20.719534 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.719804 kubelet[2738]: W0123 17:54:20.719749 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.719804 kubelet[2738]: E0123 17:54:20.719773 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.720514 kubelet[2738]: E0123 17:54:20.720496 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.720563 kubelet[2738]: W0123 17:54:20.720515 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.720563 kubelet[2738]: E0123 17:54:20.720529 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.721120 kubelet[2738]: E0123 17:54:20.721102 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.721120 kubelet[2738]: W0123 17:54:20.721118 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.721177 kubelet[2738]: E0123 17:54:20.721128 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.735962 kubelet[2738]: E0123 17:54:20.735866 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.735962 kubelet[2738]: W0123 17:54:20.735891 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.735962 kubelet[2738]: E0123 17:54:20.735914 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.735962 kubelet[2738]: E0123 17:54:20.736363 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.735962 kubelet[2738]: W0123 17:54:20.736376 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.735962 kubelet[2738]: E0123 17:54:20.736400 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.737201 kubelet[2738]: E0123 17:54:20.736690 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.737201 kubelet[2738]: W0123 17:54:20.736704 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.737201 kubelet[2738]: E0123 17:54:20.736723 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.737201 kubelet[2738]: E0123 17:54:20.736930 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.737201 kubelet[2738]: W0123 17:54:20.736946 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.737201 kubelet[2738]: E0123 17:54:20.736970 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.737492 kubelet[2738]: E0123 17:54:20.737476 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.737673 kubelet[2738]: W0123 17:54:20.737558 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.737867 kubelet[2738]: E0123 17:54:20.737756 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.738173 kubelet[2738]: E0123 17:54:20.738083 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.738173 kubelet[2738]: W0123 17:54:20.738096 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.738173 kubelet[2738]: E0123 17:54:20.738132 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.738425 kubelet[2738]: E0123 17:54:20.738411 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.738723 kubelet[2738]: W0123 17:54:20.738500 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.738723 kubelet[2738]: E0123 17:54:20.738675 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.738900 kubelet[2738]: E0123 17:54:20.738887 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.738972 kubelet[2738]: W0123 17:54:20.738960 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.739046 kubelet[2738]: E0123 17:54:20.739025 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.739247 kubelet[2738]: E0123 17:54:20.739233 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.739742 kubelet[2738]: W0123 17:54:20.739348 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.739742 kubelet[2738]: E0123 17:54:20.739378 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.739905 kubelet[2738]: E0123 17:54:20.739890 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.739970 kubelet[2738]: W0123 17:54:20.739957 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.740029 kubelet[2738]: E0123 17:54:20.740018 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.740832 kubelet[2738]: E0123 17:54:20.740805 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.740832 kubelet[2738]: W0123 17:54:20.740829 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.740926 kubelet[2738]: E0123 17:54:20.740864 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.741106 kubelet[2738]: E0123 17:54:20.741090 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.741145 kubelet[2738]: W0123 17:54:20.741107 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.741278 kubelet[2738]: E0123 17:54:20.741223 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:54:20.741324 kubelet[2738]: E0123 17:54:20.741318 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:54:20.741353 kubelet[2738]: W0123 17:54:20.741328 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:54:20.741465 kubelet[2738]: E0123 17:54:20.741394 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:20.741575 kubelet[2738]: E0123 17:54:20.741559 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:20.741624 kubelet[2738]: W0123 17:54:20.741575 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:20.741754 kubelet[2738]: E0123 17:54:20.741729 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:20.742382 kubelet[2738]: E0123 17:54:20.742051 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:20.742382 kubelet[2738]: W0123 17:54:20.742067 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:20.742382 kubelet[2738]: E0123 17:54:20.742085 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:20.743227 kubelet[2738]: E0123 17:54:20.742979 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:20.743287 kubelet[2738]: W0123 17:54:20.743232 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:20.743287 kubelet[2738]: E0123 17:54:20.743262 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:20.744601 kubelet[2738]: E0123 17:54:20.744558 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:20.744601 kubelet[2738]: W0123 17:54:20.744594 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:20.744711 kubelet[2738]: E0123 17:54:20.744619 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:54:20.745922 kubelet[2738]: E0123 17:54:20.745870 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:54:20.745922 kubelet[2738]: W0123 17:54:20.745901 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:54:20.746010 kubelet[2738]: E0123 17:54:20.745927 2738 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:54:20.807493 containerd[1527]: time="2026-01-23T17:54:20.807408146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:20.810672 containerd[1527]: time="2026-01-23T17:54:20.810374944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 17:54:20.811797 containerd[1527]: time="2026-01-23T17:54:20.811735392Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:20.816797 containerd[1527]: time="2026-01-23T17:54:20.816700618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:20.817889 containerd[1527]: time="2026-01-23T17:54:20.817198744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.705795492s" Jan 23 17:54:20.817889 containerd[1527]: time="2026-01-23T17:54:20.817237588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 17:54:20.821697 containerd[1527]: time="2026-01-23T17:54:20.821615359Z" level=info msg="CreateContainer within sandbox \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 17:54:20.833463 containerd[1527]: time="2026-01-23T17:54:20.832696238Z" level=info msg="Container 4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:54:20.853011 containerd[1527]: time="2026-01-23T17:54:20.852948098Z" level=info msg="CreateContainer within sandbox \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28\"" Jan 23 17:54:20.853534 containerd[1527]: time="2026-01-23T17:54:20.853504230Z" level=info msg="StartContainer for \"4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28\"" Jan 23 17:54:20.856315 containerd[1527]: time="2026-01-23T17:54:20.856267489Z" level=info msg="connecting to shim 4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28" address="unix:///run/containerd/s/8a0877571393751aabff664471f578fa2cc0777dd358f417583d0635a22c4ae9" protocol=ttrpc version=3 Jan 23 17:54:20.887834 systemd[1]: Started cri-containerd-4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28.scope - libcontainer container 
4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28. Jan 23 17:54:20.960877 containerd[1527]: time="2026-01-23T17:54:20.960832858Z" level=info msg="StartContainer for \"4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28\" returns successfully" Jan 23 17:54:20.979867 systemd[1]: cri-containerd-4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28.scope: Deactivated successfully. Jan 23 17:54:20.985253 containerd[1527]: time="2026-01-23T17:54:20.984930678Z" level=info msg="received container exit event container_id:\"4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28\" id:\"4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28\" pid:3443 exited_at:{seconds:1769190860 nanos:984386987}" Jan 23 17:54:21.010989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4946ec32803ef3d78ad9fa111e31461879e80db4ae5d09306e9ede9acd0c7c28-rootfs.mount: Deactivated successfully. Jan 23 17:54:21.651008 containerd[1527]: time="2026-01-23T17:54:21.650886432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 17:54:21.682180 kubelet[2738]: I0123 17:54:21.682121 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9f4bcb944-9fqzk" podStartSLOduration=3.175222751 podStartE2EDuration="5.68210432s" podCreationTimestamp="2026-01-23 17:54:16 +0000 UTC" firstStartedPulling="2026-01-23 17:54:16.603682883 +0000 UTC m=+29.231358576" lastFinishedPulling="2026-01-23 17:54:19.110564412 +0000 UTC m=+31.738240145" observedRunningTime="2026-01-23 17:54:19.65544602 +0000 UTC m=+32.283121753" watchObservedRunningTime="2026-01-23 17:54:21.68210432 +0000 UTC m=+34.309780013" Jan 23 17:54:22.503791 kubelet[2738]: E0123 17:54:22.503705 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:24.503027 kubelet[2738]: E0123 17:54:24.502754 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:25.169448 containerd[1527]: time="2026-01-23T17:54:25.169303366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:25.170737 containerd[1527]: time="2026-01-23T17:54:25.170710002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 17:54:25.171751 containerd[1527]: time="2026-01-23T17:54:25.171724645Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:25.173985 containerd[1527]: time="2026-01-23T17:54:25.173917906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:25.175107 containerd[1527]: time="2026-01-23T17:54:25.175065801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.524046757s" Jan 23 17:54:25.175267 containerd[1527]: time="2026-01-23T17:54:25.175238815Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 17:54:25.178734 containerd[1527]: time="2026-01-23T17:54:25.178263184Z" level=info msg="CreateContainer within sandbox \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:54:25.192453 containerd[1527]: time="2026-01-23T17:54:25.190737733Z" level=info msg="Container f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:54:25.204370 containerd[1527]: time="2026-01-23T17:54:25.204282850Z" level=info msg="CreateContainer within sandbox \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907\"" Jan 23 17:54:25.206676 containerd[1527]: time="2026-01-23T17:54:25.206618522Z" level=info msg="StartContainer for \"f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907\"" Jan 23 17:54:25.208361 containerd[1527]: time="2026-01-23T17:54:25.208288540Z" level=info msg="connecting to shim f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907" address="unix:///run/containerd/s/8a0877571393751aabff664471f578fa2cc0777dd358f417583d0635a22c4ae9" protocol=ttrpc version=3 Jan 23 17:54:25.231611 systemd[1]: Started cri-containerd-f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907.scope - libcontainer container f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907. 
Jan 23 17:54:25.309100 containerd[1527]: time="2026-01-23T17:54:25.309061647Z" level=info msg="StartContainer for \"f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907\" returns successfully"
Jan 23 17:54:25.865323 containerd[1527]: time="2026-01-23T17:54:25.865112688Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 17:54:25.871091 systemd[1]: cri-containerd-f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907.scope: Deactivated successfully.
Jan 23 17:54:25.871790 systemd[1]: cri-containerd-f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907.scope: Consumed 528ms CPU time, 186.6M memory peak, 165.9M written to disk.
Jan 23 17:54:25.876856 containerd[1527]: time="2026-01-23T17:54:25.876543110Z" level=info msg="received container exit event container_id:\"f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907\" id:\"f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907\" pid:3501 exited_at:{seconds:1769190865 nanos:876178000}"
Jan 23 17:54:25.902046 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7e3a8492f4a8061eead9acfe88fe28abbb0e63a25552fbe1fcb7d65e8fa1907-rootfs.mount: Deactivated successfully.
Jan 23 17:54:25.972228 kubelet[2738]: I0123 17:54:25.972185 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 23 17:54:26.026379 systemd[1]: Created slice kubepods-burstable-pod52c7e4c9_38c2_4c47_9d6b_f34c305fc85d.slice - libcontainer container kubepods-burstable-pod52c7e4c9_38c2_4c47_9d6b_f34c305fc85d.slice.
Jan 23 17:54:26.044041 systemd[1]: Created slice kubepods-burstable-poddc92af43_aee3_432c_b980_ef838915552e.slice - libcontainer container kubepods-burstable-poddc92af43_aee3_432c_b980_ef838915552e.slice.
Jan 23 17:54:26.054583 systemd[1]: Created slice kubepods-besteffort-pod8c709d5d_7113_42e9_bc41_af7907cc6116.slice - libcontainer container kubepods-besteffort-pod8c709d5d_7113_42e9_bc41_af7907cc6116.slice.
Jan 23 17:54:26.072949 systemd[1]: Created slice kubepods-besteffort-podfd167579_8a7a_45a7_a1f9_0788814a0466.slice - libcontainer container kubepods-besteffort-podfd167579_8a7a_45a7_a1f9_0788814a0466.slice.
Jan 23 17:54:26.077295 kubelet[2738]: I0123 17:54:26.076857 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh9kn\" (UniqueName: \"kubernetes.io/projected/dc92af43-aee3-432c-b980-ef838915552e-kube-api-access-fh9kn\") pod \"coredns-668d6bf9bc-8m2rz\" (UID: \"dc92af43-aee3-432c-b980-ef838915552e\") " pod="kube-system/coredns-668d6bf9bc-8m2rz"
Jan 23 17:54:26.077295 kubelet[2738]: I0123 17:54:26.076916 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc92af43-aee3-432c-b980-ef838915552e-config-volume\") pod \"coredns-668d6bf9bc-8m2rz\" (UID: \"dc92af43-aee3-432c-b980-ef838915552e\") " pod="kube-system/coredns-668d6bf9bc-8m2rz"
Jan 23 17:54:26.077295 kubelet[2738]: I0123 17:54:26.076946 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fd167579-8a7a-45a7-a1f9-0788814a0466-calico-apiserver-certs\") pod \"calico-apiserver-9fdf556b5-xqkqz\" (UID: \"fd167579-8a7a-45a7-a1f9-0788814a0466\") " pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz"
Jan 23 17:54:26.077295 kubelet[2738]: I0123 17:54:26.076965 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn85d\" (UniqueName: \"kubernetes.io/projected/fd167579-8a7a-45a7-a1f9-0788814a0466-kube-api-access-tn85d\") pod \"calico-apiserver-9fdf556b5-xqkqz\" (UID: \"fd167579-8a7a-45a7-a1f9-0788814a0466\") " pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz"
Jan 23 17:54:26.077295 kubelet[2738]: I0123 17:54:26.076991 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6dt\" (UniqueName: \"kubernetes.io/projected/8c709d5d-7113-42e9-bc41-af7907cc6116-kube-api-access-2p6dt\") pod \"calico-kube-controllers-69ff6445f8-4fhb4\" (UID: \"8c709d5d-7113-42e9-bc41-af7907cc6116\") " pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4"
Jan 23 17:54:26.077549 kubelet[2738]: I0123 17:54:26.077013 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2da6827-d5c3-485d-a17f-86ee3e12342c-calico-apiserver-certs\") pod \"calico-apiserver-9fdf556b5-bgdgn\" (UID: \"f2da6827-d5c3-485d-a17f-86ee3e12342c\") " pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn"
Jan 23 17:54:26.077549 kubelet[2738]: I0123 17:54:26.077033 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52c7e4c9-38c2-4c47-9d6b-f34c305fc85d-config-volume\") pod \"coredns-668d6bf9bc-mx8n8\" (UID: \"52c7e4c9-38c2-4c47-9d6b-f34c305fc85d\") " pod="kube-system/coredns-668d6bf9bc-mx8n8"
Jan 23 17:54:26.077549 kubelet[2738]: I0123 17:54:26.077057 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fv9r\" (UniqueName: \"kubernetes.io/projected/52c7e4c9-38c2-4c47-9d6b-f34c305fc85d-kube-api-access-2fv9r\") pod \"coredns-668d6bf9bc-mx8n8\" (UID: \"52c7e4c9-38c2-4c47-9d6b-f34c305fc85d\") " pod="kube-system/coredns-668d6bf9bc-mx8n8"
Jan 23 17:54:26.077549 kubelet[2738]: I0123 17:54:26.077263 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9xz\" (UniqueName: \"kubernetes.io/projected/f2da6827-d5c3-485d-a17f-86ee3e12342c-kube-api-access-nc9xz\") pod \"calico-apiserver-9fdf556b5-bgdgn\" (UID: \"f2da6827-d5c3-485d-a17f-86ee3e12342c\") " pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn"
Jan 23 17:54:26.077549 kubelet[2738]: I0123 17:54:26.077315 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c709d5d-7113-42e9-bc41-af7907cc6116-tigera-ca-bundle\") pod \"calico-kube-controllers-69ff6445f8-4fhb4\" (UID: \"8c709d5d-7113-42e9-bc41-af7907cc6116\") " pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4"
Jan 23 17:54:26.084136 systemd[1]: Created slice kubepods-besteffort-podf2da6827_d5c3_485d_a17f_86ee3e12342c.slice - libcontainer container kubepods-besteffort-podf2da6827_d5c3_485d_a17f_86ee3e12342c.slice.
Jan 23 17:54:26.095901 systemd[1]: Created slice kubepods-besteffort-pod67b2de8c_adfd_41ce_a209_5eab9ae1e756.slice - libcontainer container kubepods-besteffort-pod67b2de8c_adfd_41ce_a209_5eab9ae1e756.slice.
Jan 23 17:54:26.106881 systemd[1]: Created slice kubepods-besteffort-pod119d6cb7_df4f_4666_99dc_b6b108e59837.slice - libcontainer container kubepods-besteffort-pod119d6cb7_df4f_4666_99dc_b6b108e59837.slice.
Jan 23 17:54:26.113702 systemd[1]: Created slice kubepods-besteffort-pod25d835ef_f3bb_42c6_bc1f_07f8b7a82a66.slice - libcontainer container kubepods-besteffort-pod25d835ef_f3bb_42c6_bc1f_07f8b7a82a66.slice.
Jan 23 17:54:26.178642 kubelet[2738]: I0123 17:54:26.178221 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25d835ef-f3bb-42c6-bc1f-07f8b7a82a66-goldmane-ca-bundle\") pod \"goldmane-666569f655-zzc7s\" (UID: \"25d835ef-f3bb-42c6-bc1f-07f8b7a82a66\") " pod="calico-system/goldmane-666569f655-zzc7s"
Jan 23 17:54:26.178642 kubelet[2738]: I0123 17:54:26.178308 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjbc\" (UniqueName: \"kubernetes.io/projected/119d6cb7-df4f-4666-99dc-b6b108e59837-kube-api-access-pdjbc\") pod \"whisker-7b99b9cff5-skwv6\" (UID: \"119d6cb7-df4f-4666-99dc-b6b108e59837\") " pod="calico-system/whisker-7b99b9cff5-skwv6"
Jan 23 17:54:26.180368 kubelet[2738]: I0123 17:54:26.180235 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25d835ef-f3bb-42c6-bc1f-07f8b7a82a66-config\") pod \"goldmane-666569f655-zzc7s\" (UID: \"25d835ef-f3bb-42c6-bc1f-07f8b7a82a66\") " pod="calico-system/goldmane-666569f655-zzc7s"
Jan 23 17:54:26.180640 kubelet[2738]: I0123 17:54:26.180573 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stgxv\" (UniqueName: \"kubernetes.io/projected/25d835ef-f3bb-42c6-bc1f-07f8b7a82a66-kube-api-access-stgxv\") pod \"goldmane-666569f655-zzc7s\" (UID: \"25d835ef-f3bb-42c6-bc1f-07f8b7a82a66\") " pod="calico-system/goldmane-666569f655-zzc7s"
Jan 23 17:54:26.181065 kubelet[2738]: I0123 17:54:26.180612 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-ca-bundle\") pod \"whisker-7b99b9cff5-skwv6\" (UID: \"119d6cb7-df4f-4666-99dc-b6b108e59837\") " pod="calico-system/whisker-7b99b9cff5-skwv6"
Jan 23 17:54:26.181491 kubelet[2738]: I0123 17:54:26.181384 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-backend-key-pair\") pod \"whisker-7b99b9cff5-skwv6\" (UID: \"119d6cb7-df4f-4666-99dc-b6b108e59837\") " pod="calico-system/whisker-7b99b9cff5-skwv6"
Jan 23 17:54:26.181684 kubelet[2738]: I0123 17:54:26.181668 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/25d835ef-f3bb-42c6-bc1f-07f8b7a82a66-goldmane-key-pair\") pod \"goldmane-666569f655-zzc7s\" (UID: \"25d835ef-f3bb-42c6-bc1f-07f8b7a82a66\") " pod="calico-system/goldmane-666569f655-zzc7s"
Jan 23 17:54:26.181826 kubelet[2738]: I0123 17:54:26.181767 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9p8n\" (UniqueName: \"kubernetes.io/projected/67b2de8c-adfd-41ce-a209-5eab9ae1e756-kube-api-access-k9p8n\") pod \"calico-apiserver-6bfd8975d7-8wdqx\" (UID: \"67b2de8c-adfd-41ce-a209-5eab9ae1e756\") " pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx"
Jan 23 17:54:26.181886 kubelet[2738]: I0123 17:54:26.181803 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67b2de8c-adfd-41ce-a209-5eab9ae1e756-calico-apiserver-certs\") pod \"calico-apiserver-6bfd8975d7-8wdqx\" (UID: \"67b2de8c-adfd-41ce-a209-5eab9ae1e756\") " pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx"
Jan 23 17:54:26.331829 containerd[1527]: time="2026-01-23T17:54:26.331770662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx8n8,Uid:52c7e4c9-38c2-4c47-9d6b-f34c305fc85d,Namespace:kube-system,Attempt:0,}"
Jan 23 17:54:26.350035 containerd[1527]: time="2026-01-23T17:54:26.349994891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8m2rz,Uid:dc92af43-aee3-432c-b980-ef838915552e,Namespace:kube-system,Attempt:0,}"
Jan 23 17:54:26.365505 containerd[1527]: time="2026-01-23T17:54:26.365148592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69ff6445f8-4fhb4,Uid:8c709d5d-7113-42e9-bc41-af7907cc6116,Namespace:calico-system,Attempt:0,}"
Jan 23 17:54:26.379288 containerd[1527]: time="2026-01-23T17:54:26.379179882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-xqkqz,Uid:fd167579-8a7a-45a7-a1f9-0788814a0466,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 17:54:26.392877 containerd[1527]: time="2026-01-23T17:54:26.392839463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-bgdgn,Uid:f2da6827-d5c3-485d-a17f-86ee3e12342c,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 17:54:26.402591 containerd[1527]: time="2026-01-23T17:54:26.402542965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfd8975d7-8wdqx,Uid:67b2de8c-adfd-41ce-a209-5eab9ae1e756,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 17:54:26.413381 containerd[1527]: time="2026-01-23T17:54:26.413305272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b99b9cff5-skwv6,Uid:119d6cb7-df4f-4666-99dc-b6b108e59837,Namespace:calico-system,Attempt:0,}"
Jan 23 17:54:26.422190 containerd[1527]: time="2026-01-23T17:54:26.421922606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zzc7s,Uid:25d835ef-f3bb-42c6-bc1f-07f8b7a82a66,Namespace:calico-system,Attempt:0,}"
Jan 23 17:54:26.493975 containerd[1527]: time="2026-01-23T17:54:26.493861322Z" level=error msg="Failed to destroy network for sandbox \"a270b5e2959abd88f3e4cc49778e521f5d224a659f49b2e46995c5da745f234e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.496654 containerd[1527]: time="2026-01-23T17:54:26.496596023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-xqkqz,Uid:fd167579-8a7a-45a7-a1f9-0788814a0466,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a270b5e2959abd88f3e4cc49778e521f5d224a659f49b2e46995c5da745f234e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.498567 kubelet[2738]: E0123 17:54:26.496847 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a270b5e2959abd88f3e4cc49778e521f5d224a659f49b2e46995c5da745f234e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.498567 kubelet[2738]: E0123 17:54:26.496925 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a270b5e2959abd88f3e4cc49778e521f5d224a659f49b2e46995c5da745f234e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz"
Jan 23 17:54:26.498567 kubelet[2738]: E0123 17:54:26.496947 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a270b5e2959abd88f3e4cc49778e521f5d224a659f49b2e46995c5da745f234e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz"
Jan 23 17:54:26.498720 kubelet[2738]: E0123 17:54:26.496988 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9fdf556b5-xqkqz_calico-apiserver(fd167579-8a7a-45a7-a1f9-0788814a0466)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9fdf556b5-xqkqz_calico-apiserver(fd167579-8a7a-45a7-a1f9-0788814a0466)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a270b5e2959abd88f3e4cc49778e521f5d224a659f49b2e46995c5da745f234e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466"
Jan 23 17:54:26.512064 systemd[1]: Created slice kubepods-besteffort-pod6e403bca_286c_4acf_bbf0_2ee7f3d0b56e.slice - libcontainer container kubepods-besteffort-pod6e403bca_286c_4acf_bbf0_2ee7f3d0b56e.slice.
Jan 23 17:54:26.517448 containerd[1527]: time="2026-01-23T17:54:26.517392298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jkkz7,Uid:6e403bca-286c-4acf-bbf0-2ee7f3d0b56e,Namespace:calico-system,Attempt:0,}"
Jan 23 17:54:26.543372 containerd[1527]: time="2026-01-23T17:54:26.542170815Z" level=error msg="Failed to destroy network for sandbox \"d29a7ab63574c3e3276cb0def806f80441afd2e3bd740a814e141451ffb90283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.547905 containerd[1527]: time="2026-01-23T17:54:26.547842872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-bgdgn,Uid:f2da6827-d5c3-485d-a17f-86ee3e12342c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29a7ab63574c3e3276cb0def806f80441afd2e3bd740a814e141451ffb90283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.548162 kubelet[2738]: E0123 17:54:26.548078 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29a7ab63574c3e3276cb0def806f80441afd2e3bd740a814e141451ffb90283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.548223 kubelet[2738]: E0123 17:54:26.548183 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29a7ab63574c3e3276cb0def806f80441afd2e3bd740a814e141451ffb90283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn"
Jan 23 17:54:26.548223 kubelet[2738]: E0123 17:54:26.548205 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29a7ab63574c3e3276cb0def806f80441afd2e3bd740a814e141451ffb90283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn"
Jan 23 17:54:26.548292 kubelet[2738]: E0123 17:54:26.548249 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9fdf556b5-bgdgn_calico-apiserver(f2da6827-d5c3-485d-a17f-86ee3e12342c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9fdf556b5-bgdgn_calico-apiserver(f2da6827-d5c3-485d-a17f-86ee3e12342c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d29a7ab63574c3e3276cb0def806f80441afd2e3bd740a814e141451ffb90283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c"
Jan 23 17:54:26.572729 containerd[1527]: time="2026-01-23T17:54:26.572680313Z" level=error msg="Failed to destroy network for sandbox \"40d0f6706679f16ed928d632a8dc9eb781e377499994c304b202dcd1b7d4da4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.574762 containerd[1527]: time="2026-01-23T17:54:26.574707957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69ff6445f8-4fhb4,Uid:8c709d5d-7113-42e9-bc41-af7907cc6116,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d0f6706679f16ed928d632a8dc9eb781e377499994c304b202dcd1b7d4da4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.575628 kubelet[2738]: E0123 17:54:26.574984 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d0f6706679f16ed928d632a8dc9eb781e377499994c304b202dcd1b7d4da4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 17:54:26.575628 kubelet[2738]: E0123 17:54:26.575036 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d0f6706679f16ed928d632a8dc9eb781e377499994c304b202dcd1b7d4da4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4"
Jan 23 17:54:26.575628 kubelet[2738]: E0123 17:54:26.575054 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40d0f6706679f16ed928d632a8dc9eb781e377499994c304b202dcd1b7d4da4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4"
Jan 23 17:54:26.575738
kubelet[2738]: E0123 17:54:26.575132 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69ff6445f8-4fhb4_calico-system(8c709d5d-7113-42e9-bc41-af7907cc6116)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69ff6445f8-4fhb4_calico-system(8c709d5d-7113-42e9-bc41-af7907cc6116)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40d0f6706679f16ed928d632a8dc9eb781e377499994c304b202dcd1b7d4da4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:54:26.591609 containerd[1527]: time="2026-01-23T17:54:26.591559554Z" level=error msg="Failed to destroy network for sandbox \"483f2393734bdea913c2ab0d80531fcd4de7c65f3ed65056763b05af7f0f9a2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.595994 containerd[1527]: time="2026-01-23T17:54:26.595747652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx8n8,Uid:52c7e4c9-38c2-4c47-9d6b-f34c305fc85d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"483f2393734bdea913c2ab0d80531fcd4de7c65f3ed65056763b05af7f0f9a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.597061 kubelet[2738]: E0123 17:54:26.597026 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"483f2393734bdea913c2ab0d80531fcd4de7c65f3ed65056763b05af7f0f9a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.597327 kubelet[2738]: E0123 17:54:26.597291 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483f2393734bdea913c2ab0d80531fcd4de7c65f3ed65056763b05af7f0f9a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mx8n8" Jan 23 17:54:26.597452 kubelet[2738]: E0123 17:54:26.597387 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"483f2393734bdea913c2ab0d80531fcd4de7c65f3ed65056763b05af7f0f9a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mx8n8" Jan 23 17:54:26.597590 kubelet[2738]: E0123 17:54:26.597564 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mx8n8_kube-system(52c7e4c9-38c2-4c47-9d6b-f34c305fc85d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mx8n8_kube-system(52c7e4c9-38c2-4c47-9d6b-f34c305fc85d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"483f2393734bdea913c2ab0d80531fcd4de7c65f3ed65056763b05af7f0f9a2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mx8n8" 
podUID="52c7e4c9-38c2-4c47-9d6b-f34c305fc85d" Jan 23 17:54:26.604998 containerd[1527]: time="2026-01-23T17:54:26.604946393Z" level=error msg="Failed to destroy network for sandbox \"c1ae383d287357aeb72647a93fca07c2edef499aaf51e7d17582727da0677fc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.610557 containerd[1527]: time="2026-01-23T17:54:26.610505161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8m2rz,Uid:dc92af43-aee3-432c-b980-ef838915552e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae383d287357aeb72647a93fca07c2edef499aaf51e7d17582727da0677fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.611052 kubelet[2738]: E0123 17:54:26.610741 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae383d287357aeb72647a93fca07c2edef499aaf51e7d17582727da0677fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.611052 kubelet[2738]: E0123 17:54:26.610794 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae383d287357aeb72647a93fca07c2edef499aaf51e7d17582727da0677fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8m2rz" Jan 23 17:54:26.611052 kubelet[2738]: E0123 
17:54:26.610813 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1ae383d287357aeb72647a93fca07c2edef499aaf51e7d17582727da0677fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8m2rz" Jan 23 17:54:26.611595 kubelet[2738]: E0123 17:54:26.611249 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8m2rz_kube-system(dc92af43-aee3-432c-b980-ef838915552e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8m2rz_kube-system(dc92af43-aee3-432c-b980-ef838915552e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1ae383d287357aeb72647a93fca07c2edef499aaf51e7d17582727da0677fc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8m2rz" podUID="dc92af43-aee3-432c-b980-ef838915552e" Jan 23 17:54:26.644693 containerd[1527]: time="2026-01-23T17:54:26.644590267Z" level=error msg="Failed to destroy network for sandbox \"5852237bfd669f028b751aeb689c370fde794cefa048912af3b34c64124afd04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.647667 containerd[1527]: time="2026-01-23T17:54:26.647612311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfd8975d7-8wdqx,Uid:67b2de8c-adfd-41ce-a209-5eab9ae1e756,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5852237bfd669f028b751aeb689c370fde794cefa048912af3b34c64124afd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.648501 kubelet[2738]: E0123 17:54:26.648162 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5852237bfd669f028b751aeb689c370fde794cefa048912af3b34c64124afd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.648501 kubelet[2738]: E0123 17:54:26.648322 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5852237bfd669f028b751aeb689c370fde794cefa048912af3b34c64124afd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" Jan 23 17:54:26.648501 kubelet[2738]: E0123 17:54:26.648347 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5852237bfd669f028b751aeb689c370fde794cefa048912af3b34c64124afd04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" Jan 23 17:54:26.648661 kubelet[2738]: E0123 17:54:26.648392 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5852237bfd669f028b751aeb689c370fde794cefa048912af3b34c64124afd04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:54:26.656630 containerd[1527]: time="2026-01-23T17:54:26.656578793Z" level=error msg="Failed to destroy network for sandbox \"4884b09bc6f25c40a4e3a3cf850a20c3ab0929751d235dd6e816ce1bafebf08c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.658664 containerd[1527]: time="2026-01-23T17:54:26.658424742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b99b9cff5-skwv6,Uid:119d6cb7-df4f-4666-99dc-b6b108e59837,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4884b09bc6f25c40a4e3a3cf850a20c3ab0929751d235dd6e816ce1bafebf08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.659659 containerd[1527]: time="2026-01-23T17:54:26.658605356Z" level=error msg="Failed to destroy network for sandbox \"482ee94842817cbd83a01c69408ecacb4a1763519f7a8f21144d537c7fb64179\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.659710 kubelet[2738]: E0123 17:54:26.658841 2738 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4884b09bc6f25c40a4e3a3cf850a20c3ab0929751d235dd6e816ce1bafebf08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.659710 kubelet[2738]: E0123 17:54:26.658899 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4884b09bc6f25c40a4e3a3cf850a20c3ab0929751d235dd6e816ce1bafebf08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b99b9cff5-skwv6" Jan 23 17:54:26.659710 kubelet[2738]: E0123 17:54:26.658917 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4884b09bc6f25c40a4e3a3cf850a20c3ab0929751d235dd6e816ce1bafebf08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b99b9cff5-skwv6" Jan 23 17:54:26.659800 kubelet[2738]: E0123 17:54:26.659574 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b99b9cff5-skwv6_calico-system(119d6cb7-df4f-4666-99dc-b6b108e59837)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b99b9cff5-skwv6_calico-system(119d6cb7-df4f-4666-99dc-b6b108e59837)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4884b09bc6f25c40a4e3a3cf850a20c3ab0929751d235dd6e816ce1bafebf08c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b99b9cff5-skwv6" podUID="119d6cb7-df4f-4666-99dc-b6b108e59837" Jan 23 17:54:26.661066 containerd[1527]: time="2026-01-23T17:54:26.661017831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zzc7s,Uid:25d835ef-f3bb-42c6-bc1f-07f8b7a82a66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"482ee94842817cbd83a01c69408ecacb4a1763519f7a8f21144d537c7fb64179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.661615 kubelet[2738]: E0123 17:54:26.661574 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482ee94842817cbd83a01c69408ecacb4a1763519f7a8f21144d537c7fb64179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.661739 kubelet[2738]: E0123 17:54:26.661718 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482ee94842817cbd83a01c69408ecacb4a1763519f7a8f21144d537c7fb64179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zzc7s" Jan 23 17:54:26.661776 kubelet[2738]: E0123 17:54:26.661746 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482ee94842817cbd83a01c69408ecacb4a1763519f7a8f21144d537c7fb64179\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zzc7s" Jan 23 17:54:26.661862 kubelet[2738]: E0123 17:54:26.661800 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-zzc7s_calico-system(25d835ef-f3bb-42c6-bc1f-07f8b7a82a66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zzc7s_calico-system(25d835ef-f3bb-42c6-bc1f-07f8b7a82a66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"482ee94842817cbd83a01c69408ecacb4a1763519f7a8f21144d537c7fb64179\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:54:26.675072 containerd[1527]: time="2026-01-23T17:54:26.675022039Z" level=error msg="Failed to destroy network for sandbox \"f51873a2c0d9e7634218aab036e1bcf624b889e069dd0d5e03cd1df8bf38199c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.676821 containerd[1527]: time="2026-01-23T17:54:26.676761299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jkkz7,Uid:6e403bca-286c-4acf-bbf0-2ee7f3d0b56e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51873a2c0d9e7634218aab036e1bcf624b889e069dd0d5e03cd1df8bf38199c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.677339 kubelet[2738]: E0123 17:54:26.677096 2738 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51873a2c0d9e7634218aab036e1bcf624b889e069dd0d5e03cd1df8bf38199c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:54:26.677339 kubelet[2738]: E0123 17:54:26.677314 2738 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51873a2c0d9e7634218aab036e1bcf624b889e069dd0d5e03cd1df8bf38199c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:26.677875 kubelet[2738]: E0123 17:54:26.677478 2738 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51873a2c0d9e7634218aab036e1bcf624b889e069dd0d5e03cd1df8bf38199c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jkkz7" Jan 23 17:54:26.677875 kubelet[2738]: E0123 17:54:26.677532 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f51873a2c0d9e7634218aab036e1bcf624b889e069dd0d5e03cd1df8bf38199c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:26.678986 containerd[1527]: time="2026-01-23T17:54:26.678956396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:54:32.434347 kubelet[2738]: I0123 17:54:32.434298 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:54:33.012725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442710925.mount: Deactivated successfully. Jan 23 17:54:33.035483 containerd[1527]: time="2026-01-23T17:54:33.034823653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:33.036006 containerd[1527]: time="2026-01-23T17:54:33.035977935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 17:54:33.036818 containerd[1527]: time="2026-01-23T17:54:33.036786072Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:33.038616 containerd[1527]: time="2026-01-23T17:54:33.038548156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:54:33.039068 containerd[1527]: time="2026-01-23T17:54:33.039034430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.359863936s" Jan 23 17:54:33.039139 containerd[1527]: 
time="2026-01-23T17:54:33.039068752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:54:33.057892 containerd[1527]: time="2026-01-23T17:54:33.057834393Z" level=info msg="CreateContainer within sandbox \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:54:33.071678 containerd[1527]: time="2026-01-23T17:54:33.071619524Z" level=info msg="Container a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:54:33.092082 containerd[1527]: time="2026-01-23T17:54:33.092001079Z" level=info msg="CreateContainer within sandbox \"75ba181e263c1f39629c474b5007d97052397f9ff9a0842aceb46698247ce522\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b\"" Jan 23 17:54:33.092836 containerd[1527]: time="2026-01-23T17:54:33.092700608Z" level=info msg="StartContainer for \"a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b\"" Jan 23 17:54:33.096027 containerd[1527]: time="2026-01-23T17:54:33.095990360Z" level=info msg="connecting to shim a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b" address="unix:///run/containerd/s/8a0877571393751aabff664471f578fa2cc0777dd358f417583d0635a22c4ae9" protocol=ttrpc version=3 Jan 23 17:54:33.132729 systemd[1]: Started cri-containerd-a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b.scope - libcontainer container a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b. Jan 23 17:54:33.253729 containerd[1527]: time="2026-01-23T17:54:33.253669381Z" level=info msg="StartContainer for \"a0adf27b7340a8713b305a38dfac62a99e378b6d4e5a0a8d2dbe93d8d908888b\" returns successfully" Jan 23 17:54:33.382954 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jan 23 17:54:33.383091 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 17:54:33.637442 kubelet[2738]: I0123 17:54:33.637276 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdjbc\" (UniqueName: \"kubernetes.io/projected/119d6cb7-df4f-4666-99dc-b6b108e59837-kube-api-access-pdjbc\") pod \"119d6cb7-df4f-4666-99dc-b6b108e59837\" (UID: \"119d6cb7-df4f-4666-99dc-b6b108e59837\") " Jan 23 17:54:33.637442 kubelet[2738]: I0123 17:54:33.637332 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-backend-key-pair\") pod \"119d6cb7-df4f-4666-99dc-b6b108e59837\" (UID: \"119d6cb7-df4f-4666-99dc-b6b108e59837\") " Jan 23 17:54:33.637442 kubelet[2738]: I0123 17:54:33.637380 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-ca-bundle\") pod \"119d6cb7-df4f-4666-99dc-b6b108e59837\" (UID: \"119d6cb7-df4f-4666-99dc-b6b108e59837\") " Jan 23 17:54:33.641449 kubelet[2738]: I0123 17:54:33.640378 2738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "119d6cb7-df4f-4666-99dc-b6b108e59837" (UID: "119d6cb7-df4f-4666-99dc-b6b108e59837"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:54:33.643450 kubelet[2738]: I0123 17:54:33.642705 2738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119d6cb7-df4f-4666-99dc-b6b108e59837-kube-api-access-pdjbc" (OuterVolumeSpecName: "kube-api-access-pdjbc") pod "119d6cb7-df4f-4666-99dc-b6b108e59837" (UID: "119d6cb7-df4f-4666-99dc-b6b108e59837"). InnerVolumeSpecName "kube-api-access-pdjbc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:54:33.644790 kubelet[2738]: I0123 17:54:33.644407 2738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "119d6cb7-df4f-4666-99dc-b6b108e59837" (UID: "119d6cb7-df4f-4666-99dc-b6b108e59837"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:54:33.719035 systemd[1]: Removed slice kubepods-besteffort-pod119d6cb7_df4f_4666_99dc_b6b108e59837.slice - libcontainer container kubepods-besteffort-pod119d6cb7_df4f_4666_99dc_b6b108e59837.slice. 
Jan 23 17:54:33.740451 kubelet[2738]: I0123 17:54:33.740286 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-ca-bundle\") on node \"ci-4459-2-3-1-a204a5ad1b\" DevicePath \"\"" Jan 23 17:54:33.740451 kubelet[2738]: I0123 17:54:33.740328 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdjbc\" (UniqueName: \"kubernetes.io/projected/119d6cb7-df4f-4666-99dc-b6b108e59837-kube-api-access-pdjbc\") on node \"ci-4459-2-3-1-a204a5ad1b\" DevicePath \"\"" Jan 23 17:54:33.740451 kubelet[2738]: I0123 17:54:33.740337 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/119d6cb7-df4f-4666-99dc-b6b108e59837-whisker-backend-key-pair\") on node \"ci-4459-2-3-1-a204a5ad1b\" DevicePath \"\"" Jan 23 17:54:33.766205 kubelet[2738]: I0123 17:54:33.766067 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kzqjp" podStartSLOduration=1.5710003970000002 podStartE2EDuration="17.760563429s" podCreationTimestamp="2026-01-23 17:54:16 +0000 UTC" firstStartedPulling="2026-01-23 17:54:16.850504671 +0000 UTC m=+29.478180404" lastFinishedPulling="2026-01-23 17:54:33.040067743 +0000 UTC m=+45.667743436" observedRunningTime="2026-01-23 17:54:33.756489462 +0000 UTC m=+46.384165195" watchObservedRunningTime="2026-01-23 17:54:33.760563429 +0000 UTC m=+46.388239122" Jan 23 17:54:33.832627 systemd[1]: Created slice kubepods-besteffort-podd427e806_f7cc_4b74_be8f_94a08c7ee702.slice - libcontainer container kubepods-besteffort-podd427e806_f7cc_4b74_be8f_94a08c7ee702.slice. 
Jan 23 17:54:33.941930 kubelet[2738]: I0123 17:54:33.941778 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vsk7\" (UniqueName: \"kubernetes.io/projected/d427e806-f7cc-4b74-be8f-94a08c7ee702-kube-api-access-7vsk7\") pod \"whisker-79545dbc5f-lz9w4\" (UID: \"d427e806-f7cc-4b74-be8f-94a08c7ee702\") " pod="calico-system/whisker-79545dbc5f-lz9w4" Jan 23 17:54:33.941930 kubelet[2738]: I0123 17:54:33.941883 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d427e806-f7cc-4b74-be8f-94a08c7ee702-whisker-backend-key-pair\") pod \"whisker-79545dbc5f-lz9w4\" (UID: \"d427e806-f7cc-4b74-be8f-94a08c7ee702\") " pod="calico-system/whisker-79545dbc5f-lz9w4" Jan 23 17:54:33.943017 kubelet[2738]: I0123 17:54:33.942794 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d427e806-f7cc-4b74-be8f-94a08c7ee702-whisker-ca-bundle\") pod \"whisker-79545dbc5f-lz9w4\" (UID: \"d427e806-f7cc-4b74-be8f-94a08c7ee702\") " pod="calico-system/whisker-79545dbc5f-lz9w4" Jan 23 17:54:34.015701 systemd[1]: var-lib-kubelet-pods-119d6cb7\x2ddf4f\x2d4666\x2d99dc\x2db6b108e59837-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpdjbc.mount: Deactivated successfully. Jan 23 17:54:34.016181 systemd[1]: var-lib-kubelet-pods-119d6cb7\x2ddf4f\x2d4666\x2d99dc\x2db6b108e59837-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 17:54:34.141498 containerd[1527]: time="2026-01-23T17:54:34.141242673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79545dbc5f-lz9w4,Uid:d427e806-f7cc-4b74-be8f-94a08c7ee702,Namespace:calico-system,Attempt:0,}" Jan 23 17:54:34.327448 systemd-networkd[1422]: cali01b0f2ca05f: Link UP Jan 23 17:54:34.327969 systemd-networkd[1422]: cali01b0f2ca05f: Gained carrier Jan 23 17:54:34.350483 containerd[1527]: 2026-01-23 17:54:34.174 [INFO][3853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:54:34.350483 containerd[1527]: 2026-01-23 17:54:34.211 [INFO][3853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0 whisker-79545dbc5f- calico-system d427e806-f7cc-4b74-be8f-94a08c7ee702 939 0 2026-01-23 17:54:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79545dbc5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b whisker-79545dbc5f-lz9w4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali01b0f2ca05f [] [] }} ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-" Jan 23 17:54:34.350483 containerd[1527]: 2026-01-23 17:54:34.211 [INFO][3853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.350483 containerd[1527]: 2026-01-23 17:54:34.262 [INFO][3866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" HandleID="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.262 [INFO][3866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" HandleID="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003181e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"whisker-79545dbc5f-lz9w4", "timestamp":"2026-01-23 17:54:34.262318783 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.262 [INFO][3866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.262 [INFO][3866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.262 [INFO][3866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.277 [INFO][3866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.287 [INFO][3866] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.292 [INFO][3866] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.295 [INFO][3866] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350754 containerd[1527]: 2026-01-23 17:54:34.300 [INFO][3866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.300 [INFO][3866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.302 [INFO][3866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46 Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.307 [INFO][3866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.315 [INFO][3866] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.65/26] block=192.168.74.64/26 handle="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.315 [INFO][3866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.65/26] handle="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.315 [INFO][3866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:34.350946 containerd[1527]: 2026-01-23 17:54:34.315 [INFO][3866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.65/26] IPv6=[] ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" HandleID="k8s-pod-network.eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.351074 containerd[1527]: 2026-01-23 17:54:34.318 [INFO][3853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0", GenerateName:"whisker-79545dbc5f-", Namespace:"calico-system", SelfLink:"", UID:"d427e806-f7cc-4b74-be8f-94a08c7ee702", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79545dbc5f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"whisker-79545dbc5f-lz9w4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali01b0f2ca05f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:34.351074 containerd[1527]: 2026-01-23 17:54:34.319 [INFO][3853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.65/32] ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.351142 containerd[1527]: 2026-01-23 17:54:34.319 [INFO][3853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01b0f2ca05f ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.351142 containerd[1527]: 2026-01-23 17:54:34.328 [INFO][3853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.351181 containerd[1527]: 2026-01-23 17:54:34.328 [INFO][3853] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0", GenerateName:"whisker-79545dbc5f-", Namespace:"calico-system", SelfLink:"", UID:"d427e806-f7cc-4b74-be8f-94a08c7ee702", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79545dbc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46", Pod:"whisker-79545dbc5f-lz9w4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.74.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali01b0f2ca05f", MAC:"42:ce:34:32:d1:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:34.351227 containerd[1527]: 2026-01-23 17:54:34.345 [INFO][3853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" Namespace="calico-system" Pod="whisker-79545dbc5f-lz9w4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-whisker--79545dbc5f--lz9w4-eth0" Jan 23 17:54:34.414033 containerd[1527]: time="2026-01-23T17:54:34.413881165Z" level=info msg="connecting to shim eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46" address="unix:///run/containerd/s/f0c22880a425f55e5c74b1db264817e84535b94864fcb2f8e9e051b5206e5ec3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:34.441832 systemd[1]: Started cri-containerd-eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46.scope - libcontainer container eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46. Jan 23 17:54:34.489026 containerd[1527]: time="2026-01-23T17:54:34.488986849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79545dbc5f-lz9w4,Uid:d427e806-f7cc-4b74-be8f-94a08c7ee702,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb872cec39a52f9ecef150112691797c4ac2aa7b3cd2b984026e98fcccbabd46\"" Jan 23 17:54:34.490605 containerd[1527]: time="2026-01-23T17:54:34.490570399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:54:34.716571 kubelet[2738]: I0123 17:54:34.715417 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:54:34.841834 containerd[1527]: time="2026-01-23T17:54:34.841540758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:34.843080 containerd[1527]: time="2026-01-23T17:54:34.843013500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:54:34.843400 containerd[1527]: 
time="2026-01-23T17:54:34.843070224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:54:34.843599 kubelet[2738]: E0123 17:54:34.843549 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:54:34.844217 kubelet[2738]: E0123 17:54:34.843609 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:54:34.853501 kubelet[2738]: E0123 17:54:34.853393 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2123739c92b4125985fdb77df2a34b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:34.857612 containerd[1527]: time="2026-01-23T17:54:34.857484183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 
17:54:35.214554 containerd[1527]: time="2026-01-23T17:54:35.214186876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:35.215826 containerd[1527]: time="2026-01-23T17:54:35.215523007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:54:35.215826 containerd[1527]: time="2026-01-23T17:54:35.215537688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:54:35.215927 kubelet[2738]: E0123 17:54:35.215867 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:54:35.215927 kubelet[2738]: E0123 17:54:35.215915 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:54:35.217450 kubelet[2738]: E0123 17:54:35.216031 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:35.217816 kubelet[2738]: E0123 17:54:35.217756 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:54:35.392575 systemd-networkd[1422]: cali01b0f2ca05f: Gained IPv6LL Jan 23 17:54:35.478958 systemd-networkd[1422]: vxlan.calico: Link UP Jan 23 17:54:35.478971 systemd-networkd[1422]: vxlan.calico: Gained carrier Jan 23 17:54:35.526958 kubelet[2738]: I0123 17:54:35.526900 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119d6cb7-df4f-4666-99dc-b6b108e59837" path="/var/lib/kubelet/pods/119d6cb7-df4f-4666-99dc-b6b108e59837/volumes" Jan 23 17:54:35.722467 kubelet[2738]: E0123 17:54:35.722366 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:54:36.864596 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Jan 23 17:54:37.505894 containerd[1527]: time="2026-01-23T17:54:37.505657932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx8n8,Uid:52c7e4c9-38c2-4c47-9d6b-f34c305fc85d,Namespace:kube-system,Attempt:0,}" Jan 23 17:54:37.658883 systemd-networkd[1422]: cali1e63d393538: Link UP Jan 23 17:54:37.660320 systemd-networkd[1422]: cali1e63d393538: Gained carrier Jan 23 17:54:37.682415 containerd[1527]: 2026-01-23 17:54:37.550 [INFO][4116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0 coredns-668d6bf9bc- kube-system 52c7e4c9-38c2-4c47-9d6b-f34c305fc85d 856 0 2026-01-23 17:53:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b coredns-668d6bf9bc-mx8n8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e63d393538 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" 
WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-" Jan 23 17:54:37.682415 containerd[1527]: 2026-01-23 17:54:37.551 [INFO][4116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.682415 containerd[1527]: 2026-01-23 17:54:37.586 [INFO][4128] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" HandleID="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.587 [INFO][4128] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" HandleID="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"coredns-668d6bf9bc-mx8n8", "timestamp":"2026-01-23 17:54:37.58686968 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.587 [INFO][4128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.587 [INFO][4128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.587 [INFO][4128] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.598 [INFO][4128] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.605 [INFO][4128] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.615 [INFO][4128] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.618 [INFO][4128] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.683601 containerd[1527]: 2026-01-23 17:54:37.626 [INFO][4128] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.626 [INFO][4128] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.628 [INFO][4128] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.635 [INFO][4128] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.643 [INFO][4128] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.66/26] block=192.168.74.64/26 handle="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.643 [INFO][4128] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.66/26] handle="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.643 [INFO][4128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:37.684624 containerd[1527]: 2026-01-23 17:54:37.643 [INFO][4128] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.66/26] IPv6=[] ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" HandleID="k8s-pod-network.116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.685168 containerd[1527]: 2026-01-23 17:54:37.651 [INFO][4116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"52c7e4c9-38c2-4c47-9d6b-f34c305fc85d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"coredns-668d6bf9bc-mx8n8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e63d393538", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:37.685168 containerd[1527]: 2026-01-23 17:54:37.651 [INFO][4116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.66/32] ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.685168 containerd[1527]: 2026-01-23 17:54:37.651 [INFO][4116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e63d393538 ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.685168 containerd[1527]: 2026-01-23 17:54:37.658 [INFO][4116] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.685168 containerd[1527]: 2026-01-23 17:54:37.659 [INFO][4116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"52c7e4c9-38c2-4c47-9d6b-f34c305fc85d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd", Pod:"coredns-668d6bf9bc-mx8n8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e63d393538", 
MAC:"5e:69:8c:ba:2f:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:37.685168 containerd[1527]: 2026-01-23 17:54:37.677 [INFO][4116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-mx8n8" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--mx8n8-eth0" Jan 23 17:54:37.755728 containerd[1527]: time="2026-01-23T17:54:37.755635758Z" level=info msg="connecting to shim 116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd" address="unix:///run/containerd/s/a609d5858c1f180d021986f74387a900e9c41e57f4247daf806125ba033f2b2f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:37.788664 systemd[1]: Started cri-containerd-116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd.scope - libcontainer container 116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd. 
Jan 23 17:54:37.841116 containerd[1527]: time="2026-01-23T17:54:37.840984501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mx8n8,Uid:52c7e4c9-38c2-4c47-9d6b-f34c305fc85d,Namespace:kube-system,Attempt:0,} returns sandbox id \"116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd\"" Jan 23 17:54:37.845634 containerd[1527]: time="2026-01-23T17:54:37.845582967Z" level=info msg="CreateContainer within sandbox \"116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:54:37.881719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490671955.mount: Deactivated successfully. Jan 23 17:54:37.882719 containerd[1527]: time="2026-01-23T17:54:37.882673468Z" level=info msg="Container 0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:54:37.891510 containerd[1527]: time="2026-01-23T17:54:37.891422248Z" level=info msg="CreateContainer within sandbox \"116e679fc4bda2d08b9d4a6bc195dfb268a369fdcd99dfe4e5faed07d91843dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e\"" Jan 23 17:54:37.892543 containerd[1527]: time="2026-01-23T17:54:37.892403193Z" level=info msg="StartContainer for \"0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e\"" Jan 23 17:54:37.894391 containerd[1527]: time="2026-01-23T17:54:37.894289638Z" level=info msg="connecting to shim 0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e" address="unix:///run/containerd/s/a609d5858c1f180d021986f74387a900e9c41e57f4247daf806125ba033f2b2f" protocol=ttrpc version=3 Jan 23 17:54:37.918839 systemd[1]: Started cri-containerd-0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e.scope - libcontainer container 0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e. 
Jan 23 17:54:37.963306 containerd[1527]: time="2026-01-23T17:54:37.963188530Z" level=info msg="StartContainer for \"0f7e15960a4ed39e9a9e2da01af3f67883838f5e65cabac2a49f92fbbd2f6b7e\" returns successfully" Jan 23 17:54:38.504991 containerd[1527]: time="2026-01-23T17:54:38.504765952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfd8975d7-8wdqx,Uid:67b2de8c-adfd-41ce-a209-5eab9ae1e756,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:54:38.505591 containerd[1527]: time="2026-01-23T17:54:38.505020289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jkkz7,Uid:6e403bca-286c-4acf-bbf0-2ee7f3d0b56e,Namespace:calico-system,Attempt:0,}" Jan 23 17:54:38.507136 containerd[1527]: time="2026-01-23T17:54:38.506643635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69ff6445f8-4fhb4,Uid:8c709d5d-7113-42e9-bc41-af7907cc6116,Namespace:calico-system,Attempt:0,}" Jan 23 17:54:38.753038 kubelet[2738]: I0123 17:54:38.752235 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mx8n8" podStartSLOduration=45.752214758 podStartE2EDuration="45.752214758s" podCreationTimestamp="2026-01-23 17:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:54:38.751923779 +0000 UTC m=+51.379599472" watchObservedRunningTime="2026-01-23 17:54:38.752214758 +0000 UTC m=+51.379890451" Jan 23 17:54:38.826693 systemd-networkd[1422]: cali5ddf7fac118: Link UP Jan 23 17:54:38.829458 systemd-networkd[1422]: cali5ddf7fac118: Gained carrier Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.627 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0 calico-apiserver-6bfd8975d7- calico-apiserver 
67b2de8c-adfd-41ce-a209-5eab9ae1e756 866 0 2026-01-23 17:54:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bfd8975d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b calico-apiserver-6bfd8975d7-8wdqx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ddf7fac118 [] [] }} ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.628 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.694 [INFO][4257] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" HandleID="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.694 [INFO][4257] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" HandleID="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d770), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"calico-apiserver-6bfd8975d7-8wdqx", "timestamp":"2026-01-23 17:54:38.694131114 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.694 [INFO][4257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.694 [INFO][4257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.694 [INFO][4257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.730 [INFO][4257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.750 [INFO][4257] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.775 [INFO][4257] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.780 [INFO][4257] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.786 [INFO][4257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.786 [INFO][4257] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 
handle="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.789 [INFO][4257] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.796 [INFO][4257] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.814 [INFO][4257] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.74.67/26] block=192.168.74.64/26 handle="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.814 [INFO][4257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.67/26] handle="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.814 [INFO][4257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 17:54:38.860055 containerd[1527]: 2026-01-23 17:54:38.814 [INFO][4257] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.67/26] IPv6=[] ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" HandleID="k8s-pod-network.44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.861118 containerd[1527]: 2026-01-23 17:54:38.817 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0", GenerateName:"calico-apiserver-6bfd8975d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"67b2de8c-adfd-41ce-a209-5eab9ae1e756", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bfd8975d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"calico-apiserver-6bfd8975d7-8wdqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.74.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ddf7fac118", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:38.861118 containerd[1527]: 2026-01-23 17:54:38.817 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.67/32] ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.861118 containerd[1527]: 2026-01-23 17:54:38.817 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ddf7fac118 ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.861118 containerd[1527]: 2026-01-23 17:54:38.831 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.861118 containerd[1527]: 2026-01-23 17:54:38.832 [INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0", GenerateName:"calico-apiserver-6bfd8975d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"67b2de8c-adfd-41ce-a209-5eab9ae1e756", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bfd8975d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f", Pod:"calico-apiserver-6bfd8975d7-8wdqx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ddf7fac118", MAC:"66:d6:b0:71:f8:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:38.861118 containerd[1527]: 2026-01-23 17:54:38.856 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" Namespace="calico-apiserver" Pod="calico-apiserver-6bfd8975d7-8wdqx" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--6bfd8975d7--8wdqx-eth0" Jan 23 17:54:38.918833 containerd[1527]: time="2026-01-23T17:54:38.918752826Z" 
level=info msg="connecting to shim 44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f" address="unix:///run/containerd/s/8c48c66a8058fb46e1b9eded8cdef34a5b4141c335129657893e37972d7f17cb" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:38.931969 systemd-networkd[1422]: cali18bfc7a4ec2: Link UP Jan 23 17:54:38.933711 systemd-networkd[1422]: cali18bfc7a4ec2: Gained carrier Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.630 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0 csi-node-driver- calico-system 6e403bca-286c-4acf-bbf0-2ee7f3d0b56e 771 0 2026-01-23 17:54:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b csi-node-driver-jkkz7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18bfc7a4ec2 [] [] }} ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.630 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.726 [INFO][4265] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" 
HandleID="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.728 [INFO][4265] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" HandleID="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"csi-node-driver-jkkz7", "timestamp":"2026-01-23 17:54:38.726518596 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.729 [INFO][4265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.814 [INFO][4265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.815 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.856 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.871 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.879 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.881 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.893 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.893 [INFO][4265] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.897 [INFO][4265] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.901 [INFO][4265] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.916 [INFO][4265] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.68/26] block=192.168.74.64/26 handle="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.918 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.68/26] handle="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.918 [INFO][4265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:38.969462 containerd[1527]: 2026-01-23 17:54:38.918 [INFO][4265] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.68/26] IPv6=[] ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" HandleID="k8s-pod-network.703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.970471 containerd[1527]: 2026-01-23 17:54:38.926 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"csi-node-driver-jkkz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18bfc7a4ec2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:38.970471 containerd[1527]: 2026-01-23 17:54:38.926 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.68/32] ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.970471 containerd[1527]: 2026-01-23 17:54:38.926 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18bfc7a4ec2 ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.970471 containerd[1527]: 2026-01-23 17:54:38.934 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.970471 
containerd[1527]: 2026-01-23 17:54:38.937 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6e403bca-286c-4acf-bbf0-2ee7f3d0b56e", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b", Pod:"csi-node-driver-jkkz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.74.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18bfc7a4ec2", MAC:"12:dd:95:1b:27:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:38.970471 containerd[1527]: 
2026-01-23 17:54:38.962 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" Namespace="calico-system" Pod="csi-node-driver-jkkz7" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-csi--node--driver--jkkz7-eth0" Jan 23 17:54:38.976633 systemd[1]: Started cri-containerd-44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f.scope - libcontainer container 44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f. Jan 23 17:54:39.019120 containerd[1527]: time="2026-01-23T17:54:39.018929652Z" level=info msg="connecting to shim 703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b" address="unix:///run/containerd/s/e4566dd7eb29369caddc1794e10f7a015e6a85ac1a716c5537759bbffee17129" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:39.029478 systemd-networkd[1422]: cali4436f3a338d: Link UP Jan 23 17:54:39.031401 systemd-networkd[1422]: cali4436f3a338d: Gained carrier Jan 23 17:54:39.063779 systemd[1]: Started cri-containerd-703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b.scope - libcontainer container 703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b. 
Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.628 [INFO][4231] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0 calico-kube-controllers-69ff6445f8- calico-system 8c709d5d-7113-42e9-bc41-af7907cc6116 871 0 2026-01-23 17:54:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69ff6445f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b calico-kube-controllers-69ff6445f8-4fhb4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4436f3a338d [] [] }} ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.628 [INFO][4231] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.740 [INFO][4260] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" HandleID="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.740 [INFO][4260] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" HandleID="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"calico-kube-controllers-69ff6445f8-4fhb4", "timestamp":"2026-01-23 17:54:38.740745927 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.741 [INFO][4260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.918 [INFO][4260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.918 [INFO][4260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.957 [INFO][4260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.972 [INFO][4260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.981 [INFO][4260] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.985 [INFO][4260] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.989 [INFO][4260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.989 [INFO][4260] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:38.992 [INFO][4260] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587 Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:39.000 [INFO][4260] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:39.010 [INFO][4260] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.69/26] block=192.168.74.64/26 handle="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:39.011 [INFO][4260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.69/26] handle="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:39.011 [INFO][4260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:39.072687 containerd[1527]: 2026-01-23 17:54:39.011 [INFO][4260] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.69/26] IPv6=[] ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" HandleID="k8s-pod-network.f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.073248 containerd[1527]: 2026-01-23 17:54:39.020 [INFO][4231] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0", GenerateName:"calico-kube-controllers-69ff6445f8-", Namespace:"calico-system", SelfLink:"", UID:"8c709d5d-7113-42e9-bc41-af7907cc6116", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69ff6445f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"calico-kube-controllers-69ff6445f8-4fhb4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4436f3a338d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:39.073248 containerd[1527]: 2026-01-23 17:54:39.021 [INFO][4231] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.69/32] ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.073248 containerd[1527]: 2026-01-23 17:54:39.021 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4436f3a338d ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.073248 containerd[1527]: 2026-01-23 17:54:39.031 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.073248 containerd[1527]: 2026-01-23 17:54:39.042 [INFO][4231] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0", GenerateName:"calico-kube-controllers-69ff6445f8-", Namespace:"calico-system", SelfLink:"", UID:"8c709d5d-7113-42e9-bc41-af7907cc6116", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69ff6445f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587", Pod:"calico-kube-controllers-69ff6445f8-4fhb4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.74.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4436f3a338d", MAC:"26:4a:b7:09:62:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:39.073248 containerd[1527]: 2026-01-23 17:54:39.069 [INFO][4231] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" Namespace="calico-system" Pod="calico-kube-controllers-69ff6445f8-4fhb4" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--kube--controllers--69ff6445f8--4fhb4-eth0" Jan 23 17:54:39.085094 containerd[1527]: time="2026-01-23T17:54:39.084968844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bfd8975d7-8wdqx,Uid:67b2de8c-adfd-41ce-a209-5eab9ae1e756,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"44f1b9fa7f50c985deab0bdc8538e2b72c1595f8e7e3232696f014f1c6f1fc2f\"" Jan 23 17:54:39.091159 containerd[1527]: time="2026-01-23T17:54:39.091110241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:54:39.124081 containerd[1527]: time="2026-01-23T17:54:39.124039011Z" level=info msg="connecting to shim f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587" address="unix:///run/containerd/s/b6115446e51f2307352bbcfab56e83dadbb5951f06a8bb4dfac3d7ca09c91bf2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:39.128281 containerd[1527]: time="2026-01-23T17:54:39.128221962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jkkz7,Uid:6e403bca-286c-4acf-bbf0-2ee7f3d0b56e,Namespace:calico-system,Attempt:0,} returns sandbox id \"703be67dc0e65623a3a6572a0f3f2a91e39b97da109bac3de327e5101423ae6b\"" Jan 23 17:54:39.160162 systemd[1]: Started cri-containerd-f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587.scope - libcontainer 
container f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587. Jan 23 17:54:39.213817 containerd[1527]: time="2026-01-23T17:54:39.213762295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69ff6445f8-4fhb4,Uid:8c709d5d-7113-42e9-bc41-af7907cc6116,Namespace:calico-system,Attempt:0,} returns sandbox id \"f456bdc2249251094c287bf22a1a672c30e3f4287cd5ae19a969902b9adf1587\"" Jan 23 17:54:39.360754 systemd-networkd[1422]: cali1e63d393538: Gained IPv6LL Jan 23 17:54:39.436027 containerd[1527]: time="2026-01-23T17:54:39.435724213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:39.437365 containerd[1527]: time="2026-01-23T17:54:39.437256713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:54:39.437365 containerd[1527]: time="2026-01-23T17:54:39.437359479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:39.437745 kubelet[2738]: E0123 17:54:39.437611 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:39.437745 kubelet[2738]: E0123 17:54:39.437670 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:39.439480 kubelet[2738]: E0123 17:54:39.437935 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9p8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:39.439652 containerd[1527]: time="2026-01-23T17:54:39.438413147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:54:39.439976 kubelet[2738]: E0123 17:54:39.439817 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:54:39.740610 kubelet[2738]: E0123 17:54:39.740500 2738 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:54:39.784745 containerd[1527]: time="2026-01-23T17:54:39.784654585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:39.786622 containerd[1527]: time="2026-01-23T17:54:39.786477983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:54:39.786994 containerd[1527]: time="2026-01-23T17:54:39.786643713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:54:39.787136 kubelet[2738]: E0123 17:54:39.786861 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:54:39.787136 kubelet[2738]: E0123 17:54:39.787043 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:54:39.789102 containerd[1527]: time="2026-01-23T17:54:39.788307301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:54:39.789719 kubelet[2738]: E0123 17:54:39.788768 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvF
romSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:40.126029 containerd[1527]: time="2026-01-23T17:54:40.125778121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:40.127337 containerd[1527]: time="2026-01-23T17:54:40.127274536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:54:40.127613 containerd[1527]: time="2026-01-23T17:54:40.127339540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:54:40.128130 kubelet[2738]: E0123 17:54:40.127890 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:54:40.128130 kubelet[2738]: E0123 17:54:40.127950 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:54:40.128652 kubelet[2738]: E0123 17:54:40.128243 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69ff6445f8-4fhb4_calico-system(8c709d5d-7113-42e9-bc41-af7907cc6116): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:40.128957 containerd[1527]: time="2026-01-23T17:54:40.128538497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:54:40.129180 systemd-networkd[1422]: cali5ddf7fac118: Gained IPv6LL Jan 23 17:54:40.130747 kubelet[2738]: E0123 17:54:40.130143 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:54:40.385856 systemd-networkd[1422]: cali18bfc7a4ec2: Gained IPv6LL Jan 23 17:54:40.478308 containerd[1527]: time="2026-01-23T17:54:40.478218093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:40.479874 containerd[1527]: time="2026-01-23T17:54:40.479762831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:54:40.480583 containerd[1527]: time="2026-01-23T17:54:40.479895520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:54:40.480660 kubelet[2738]: E0123 17:54:40.480135 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:54:40.480660 kubelet[2738]: E0123 17:54:40.480211 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:54:40.481230 kubelet[2738]: E0123 17:54:40.480403 2738 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:40.489341 kubelet[2738]: E0123 17:54:40.489268 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:40.497462 kubelet[2738]: I0123 17:54:40.497379 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 17:54:40.504665 containerd[1527]: time="2026-01-23T17:54:40.504584258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zzc7s,Uid:25d835ef-f3bb-42c6-bc1f-07f8b7a82a66,Namespace:calico-system,Attempt:0,}" Jan 23 17:54:40.505494 containerd[1527]: time="2026-01-23T17:54:40.505386789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-bgdgn,Uid:f2da6827-d5c3-485d-a17f-86ee3e12342c,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:54:40.726376 systemd-networkd[1422]: cali7d4d294ce5d: Link UP Jan 23 17:54:40.726857 systemd-networkd[1422]: cali7d4d294ce5d: Gained carrier Jan 23 17:54:40.760949 kubelet[2738]: 
E0123 17:54:40.760625 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:54:40.760949 kubelet[2738]: E0123 17:54:40.760827 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:54:40.763449 kubelet[2738]: E0123 17:54:40.763083 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.598 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0 calico-apiserver-9fdf556b5- calico-apiserver f2da6827-d5c3-485d-a17f-86ee3e12342c 865 0 2026-01-23 17:54:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9fdf556b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b calico-apiserver-9fdf556b5-bgdgn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7d4d294ce5d [] [] }} ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.599 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.641 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" 
HandleID="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.641 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" HandleID="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"calico-apiserver-9fdf556b5-bgdgn", "timestamp":"2026-01-23 17:54:40.641659101 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.642 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.642 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.642 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.656 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.665 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.676 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.679 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.685 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.686 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.688 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6 Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.697 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.715 [INFO][4508] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.70/26] block=192.168.74.64/26 handle="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.715 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.70/26] handle="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.715 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:40.769470 containerd[1527]: 2026-01-23 17:54:40.715 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.70/26] IPv6=[] ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" HandleID="k8s-pod-network.a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.770241 containerd[1527]: 2026-01-23 17:54:40.723 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0", GenerateName:"calico-apiserver-9fdf556b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2da6827-d5c3-485d-a17f-86ee3e12342c", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"9fdf556b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"calico-apiserver-9fdf556b5-bgdgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d4d294ce5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:40.770241 containerd[1527]: 2026-01-23 17:54:40.724 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.70/32] ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.770241 containerd[1527]: 2026-01-23 17:54:40.724 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d4d294ce5d ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.770241 containerd[1527]: 2026-01-23 17:54:40.727 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" 
WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.770241 containerd[1527]: 2026-01-23 17:54:40.728 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0", GenerateName:"calico-apiserver-9fdf556b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2da6827-d5c3-485d-a17f-86ee3e12342c", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9fdf556b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6", Pod:"calico-apiserver-9fdf556b5-bgdgn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7d4d294ce5d", MAC:"be:92:ee:04:39:c9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:40.770241 containerd[1527]: 2026-01-23 17:54:40.754 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-bgdgn" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--bgdgn-eth0" Jan 23 17:54:40.812539 containerd[1527]: time="2026-01-23T17:54:40.811518681Z" level=info msg="connecting to shim a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6" address="unix:///run/containerd/s/6c726c36a669a09a5c80b41c1d4f032dd4be62739c99dda8dfff18fe798319c0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:40.864733 systemd[1]: Started cri-containerd-a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6.scope - libcontainer container a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6. 
Jan 23 17:54:40.892111 systemd-networkd[1422]: calid41fcd73356: Link UP Jan 23 17:54:40.895665 systemd-networkd[1422]: calid41fcd73356: Gained carrier Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.593 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0 goldmane-666569f655- calico-system 25d835ef-f3bb-42c6-bc1f-07f8b7a82a66 867 0 2026-01-23 17:54:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b goldmane-666569f655-zzc7s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid41fcd73356 [] [] }} ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.593 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.648 [INFO][4502] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" HandleID="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.649 [INFO][4502] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" HandleID="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"goldmane-666569f655-zzc7s", "timestamp":"2026-01-23 17:54:40.648656789 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.649 [INFO][4502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.715 [INFO][4502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.716 [INFO][4502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.767 [INFO][4502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.809 [INFO][4502] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.837 [INFO][4502] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.841 [INFO][4502] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.857 [INFO][4502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.857 [INFO][4502] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.861 [INFO][4502] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57 Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.870 [INFO][4502] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.878 [INFO][4502] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.71/26] block=192.168.74.64/26 handle="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.878 [INFO][4502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.71/26] handle="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.878 [INFO][4502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:40.919422 containerd[1527]: 2026-01-23 17:54:40.879 [INFO][4502] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.71/26] IPv6=[] ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" HandleID="k8s-pod-network.b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.919939 containerd[1527]: 2026-01-23 17:54:40.887 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"25d835ef-f3bb-42c6-bc1f-07f8b7a82a66", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"goldmane-666569f655-zzc7s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid41fcd73356", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:40.919939 containerd[1527]: 2026-01-23 17:54:40.888 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.71/32] ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.919939 containerd[1527]: 2026-01-23 17:54:40.888 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid41fcd73356 ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.919939 containerd[1527]: 2026-01-23 17:54:40.894 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.919939 containerd[1527]: 2026-01-23 17:54:40.896 [INFO][4449] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"25d835ef-f3bb-42c6-bc1f-07f8b7a82a66", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57", Pod:"goldmane-666569f655-zzc7s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.74.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid41fcd73356", MAC:"ca:f0:e7:23:e5:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:40.919939 containerd[1527]: 2026-01-23 17:54:40.914 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" Namespace="calico-system" Pod="goldmane-666569f655-zzc7s" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-goldmane--666569f655--zzc7s-eth0" Jan 23 17:54:40.974484 containerd[1527]: time="2026-01-23T17:54:40.974376693Z" level=info msg="connecting to shim b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57" address="unix:///run/containerd/s/f115ce53ae4c6b0323fcca1afcc03cc3f9e331aaaca77b3cf6dbca503dfb1f4c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:40.985097 containerd[1527]: time="2026-01-23T17:54:40.984971890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-bgdgn,Uid:f2da6827-d5c3-485d-a17f-86ee3e12342c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a1358df8b415586bbf4f20ae80c4c837fd92c55e205e2bc389f5bd6cffc669f6\"" Jan 23 17:54:40.996717 containerd[1527]: time="2026-01-23T17:54:40.995575248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:54:41.026823 systemd[1]: Started cri-containerd-b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57.scope - libcontainer container b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57. 
Jan 23 17:54:41.089534 systemd-networkd[1422]: cali4436f3a338d: Gained IPv6LL Jan 23 17:54:41.115760 containerd[1527]: time="2026-01-23T17:54:41.115707527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zzc7s,Uid:25d835ef-f3bb-42c6-bc1f-07f8b7a82a66,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4c4063bc8a60f64fe2b1ce7aa73cc82fc4954c918d9592599c1e536621e5f57\"" Jan 23 17:54:41.339486 containerd[1527]: time="2026-01-23T17:54:41.339195497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:41.341596 containerd[1527]: time="2026-01-23T17:54:41.341526604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:41.341893 containerd[1527]: time="2026-01-23T17:54:41.341531844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:54:41.342302 kubelet[2738]: E0123 17:54:41.342250 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:41.343311 kubelet[2738]: E0123 17:54:41.342564 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:41.343665 
kubelet[2738]: E0123 17:54:41.343396 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc9xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-bgdgn_calico-apiserver(f2da6827-d5c3-485d-a17f-86ee3e12342c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:41.344185 containerd[1527]: time="2026-01-23T17:54:41.343962998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:54:41.344759 kubelet[2738]: E0123 17:54:41.344703 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:54:41.504991 containerd[1527]: 
time="2026-01-23T17:54:41.504816848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-xqkqz,Uid:fd167579-8a7a-45a7-a1f9-0788814a0466,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:54:41.505382 containerd[1527]: time="2026-01-23T17:54:41.505356442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8m2rz,Uid:dc92af43-aee3-432c-b980-ef838915552e,Namespace:kube-system,Attempt:0,}" Jan 23 17:54:41.690476 containerd[1527]: time="2026-01-23T17:54:41.689622972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:41.693304 containerd[1527]: time="2026-01-23T17:54:41.692809533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:54:41.694201 kubelet[2738]: E0123 17:54:41.694145 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:54:41.694201 kubelet[2738]: E0123 17:54:41.694209 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:54:41.694491 kubelet[2738]: E0123 17:54:41.694336 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stgxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zzc7s_calico-system(25d835ef-f3bb-42c6-bc1f-07f8b7a82a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:41.694731 containerd[1527]: time="2026-01-23T17:54:41.693230520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:41.695860 kubelet[2738]: E0123 17:54:41.695820 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:54:41.703048 systemd-networkd[1422]: cali354c25c3b56: Link UP Jan 23 
17:54:41.705604 systemd-networkd[1422]: cali354c25c3b56: Gained carrier Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.596 [INFO][4653] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0 calico-apiserver-9fdf556b5- calico-apiserver fd167579-8a7a-45a7-a1f9-0788814a0466 868 0 2026-01-23 17:54:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9fdf556b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b calico-apiserver-9fdf556b5-xqkqz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali354c25c3b56 [] [] }} ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.596 [INFO][4653] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.637 [INFO][4678] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" HandleID="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.638 [INFO][4678] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" HandleID="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"calico-apiserver-9fdf556b5-xqkqz", "timestamp":"2026-01-23 17:54:41.637903662 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.638 [INFO][4678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.638 [INFO][4678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.638 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.649 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.655 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.664 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.667 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.672 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.672 [INFO][4678] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.674 [INFO][4678] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941 Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.681 [INFO][4678] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.689 [INFO][4678] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.72/26] block=192.168.74.64/26 handle="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.689 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.72/26] handle="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.690 [INFO][4678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:41.726948 containerd[1527]: 2026-01-23 17:54:41.690 [INFO][4678] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.72/26] IPv6=[] ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" HandleID="k8s-pod-network.13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.728955 containerd[1527]: 2026-01-23 17:54:41.695 [INFO][4653] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0", GenerateName:"calico-apiserver-9fdf556b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd167579-8a7a-45a7-a1f9-0788814a0466", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"9fdf556b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"calico-apiserver-9fdf556b5-xqkqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali354c25c3b56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:41.728955 containerd[1527]: 2026-01-23 17:54:41.695 [INFO][4653] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.72/32] ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.728955 containerd[1527]: 2026-01-23 17:54:41.695 [INFO][4653] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali354c25c3b56 ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.728955 containerd[1527]: 2026-01-23 17:54:41.709 [INFO][4653] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" 
WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.728955 containerd[1527]: 2026-01-23 17:54:41.709 [INFO][4653] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0", GenerateName:"calico-apiserver-9fdf556b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd167579-8a7a-45a7-a1f9-0788814a0466", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9fdf556b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941", Pod:"calico-apiserver-9fdf556b5-xqkqz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.74.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali354c25c3b56", MAC:"de:d2:f7:e3:bf:df", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:41.728955 containerd[1527]: 2026-01-23 17:54:41.722 [INFO][4653] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" Namespace="calico-apiserver" Pod="calico-apiserver-9fdf556b5-xqkqz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-calico--apiserver--9fdf556b5--xqkqz-eth0" Jan 23 17:54:41.758633 kubelet[2738]: E0123 17:54:41.757933 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:54:41.769192 kubelet[2738]: E0123 17:54:41.769111 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:54:41.778792 containerd[1527]: time="2026-01-23T17:54:41.778673522Z" level=info msg="connecting to shim 13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941" 
address="unix:///run/containerd/s/a5df534bacbd78e822f526c6e36b2adc531b1cfbc9694edde2906e6d5f966fa6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:41.834912 systemd[1]: Started cri-containerd-13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941.scope - libcontainer container 13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941. Jan 23 17:54:41.860382 systemd-networkd[1422]: cali7658d0391fc: Link UP Jan 23 17:54:41.861557 systemd-networkd[1422]: cali7658d0391fc: Gained carrier Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.601 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0 coredns-668d6bf9bc- kube-system dc92af43-aee3-432c-b980-ef838915552e 870 0 2026-01-23 17:53:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-1-a204a5ad1b coredns-668d6bf9bc-8m2rz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7658d0391fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.601 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.643 [INFO][4680] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" HandleID="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.644 [INFO][4680] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" HandleID="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-1-a204a5ad1b", "pod":"coredns-668d6bf9bc-8m2rz", "timestamp":"2026-01-23 17:54:41.643956324 +0000 UTC"}, Hostname:"ci-4459-2-3-1-a204a5ad1b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.644 [INFO][4680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.689 [INFO][4680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.689 [INFO][4680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-a204a5ad1b' Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.749 [INFO][4680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.770 [INFO][4680] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.798 [INFO][4680] ipam/ipam.go 511: Trying affinity for 192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.804 [INFO][4680] ipam/ipam.go 158: Attempting to load block cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.814 [INFO][4680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.74.64/26 host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.817 [INFO][4680] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.74.64/26 handle="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.821 [INFO][4680] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962 Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.834 [INFO][4680] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.74.64/26 handle="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.847 [INFO][4680] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.74.73/26] block=192.168.74.64/26 handle="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.847 [INFO][4680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.74.73/26] handle="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" host="ci-4459-2-3-1-a204a5ad1b" Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.847 [INFO][4680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:54:41.887622 containerd[1527]: 2026-01-23 17:54:41.847 [INFO][4680] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.74.73/26] IPv6=[] ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" HandleID="k8s-pod-network.44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Workload="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.890380 containerd[1527]: 2026-01-23 17:54:41.851 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc92af43-aee3-432c-b980-ef838915552e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"", Pod:"coredns-668d6bf9bc-8m2rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7658d0391fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:41.890380 containerd[1527]: 2026-01-23 17:54:41.853 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.74.73/32] ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.890380 containerd[1527]: 2026-01-23 17:54:41.853 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7658d0391fc ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.890380 containerd[1527]: 2026-01-23 17:54:41.862 [INFO][4659] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.890380 containerd[1527]: 2026-01-23 17:54:41.866 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dc92af43-aee3-432c-b980-ef838915552e", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-a204a5ad1b", ContainerID:"44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962", Pod:"coredns-668d6bf9bc-8m2rz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.74.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7658d0391fc", 
MAC:"66:8c:d4:fe:2b:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:54:41.890380 containerd[1527]: 2026-01-23 17:54:41.884 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" Namespace="kube-system" Pod="coredns-668d6bf9bc-8m2rz" WorkloadEndpoint="ci--4459--2--3--1--a204a5ad1b-k8s-coredns--668d6bf9bc--8m2rz-eth0" Jan 23 17:54:41.930662 containerd[1527]: time="2026-01-23T17:54:41.930592886Z" level=info msg="connecting to shim 44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962" address="unix:///run/containerd/s/b9ce0a107842abce71d31f5e7403586cbfd6c78d9048e9df0e87d554fef6c707" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:54:41.983167 systemd[1]: Started cri-containerd-44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962.scope - libcontainer container 44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962. 
Jan 23 17:54:42.047856 containerd[1527]: time="2026-01-23T17:54:42.047807906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9fdf556b5-xqkqz,Uid:fd167579-8a7a-45a7-a1f9-0788814a0466,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"13c1d74af9023446a8508edaacae54397cfce30e619b7f833493b3dd2ba61941\"" Jan 23 17:54:42.048685 systemd-networkd[1422]: calid41fcd73356: Gained IPv6LL Jan 23 17:54:42.061459 containerd[1527]: time="2026-01-23T17:54:42.060495260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:54:42.116504 containerd[1527]: time="2026-01-23T17:54:42.115686233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8m2rz,Uid:dc92af43-aee3-432c-b980-ef838915552e,Namespace:kube-system,Attempt:0,} returns sandbox id \"44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962\"" Jan 23 17:54:42.124079 containerd[1527]: time="2026-01-23T17:54:42.124028674Z" level=info msg="CreateContainer within sandbox \"44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:54:42.140632 containerd[1527]: time="2026-01-23T17:54:42.140593151Z" level=info msg="Container b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:54:42.159779 containerd[1527]: time="2026-01-23T17:54:42.159738708Z" level=info msg="CreateContainer within sandbox \"44ff2e2664957240c4611cb1376e4707edb7289794902591715f0c85c8836962\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3\"" Jan 23 17:54:42.160628 containerd[1527]: time="2026-01-23T17:54:42.160582281Z" level=info msg="StartContainer for \"b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3\"" Jan 23 17:54:42.163183 containerd[1527]: time="2026-01-23T17:54:42.163082358Z" level=info msg="connecting to shim 
b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3" address="unix:///run/containerd/s/b9ce0a107842abce71d31f5e7403586cbfd6c78d9048e9df0e87d554fef6c707" protocol=ttrpc version=3 Jan 23 17:54:42.192679 systemd[1]: Started cri-containerd-b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3.scope - libcontainer container b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3. Jan 23 17:54:42.243577 containerd[1527]: time="2026-01-23T17:54:42.242901271Z" level=info msg="StartContainer for \"b84ff3959e3952dfce049a81cd4ac5d2cd74a382528b472444014aca3fcce2a3\" returns successfully" Jan 23 17:54:42.436314 containerd[1527]: time="2026-01-23T17:54:42.436133599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:42.438456 containerd[1527]: time="2026-01-23T17:54:42.437784903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:54:42.438456 containerd[1527]: time="2026-01-23T17:54:42.437839106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:42.438623 kubelet[2738]: E0123 17:54:42.438016 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:42.438623 kubelet[2738]: E0123 17:54:42.438061 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:42.438623 kubelet[2738]: E0123 17:54:42.438220 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn85d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-xqkqz_calico-apiserver(fd167579-8a7a-45a7-a1f9-0788814a0466): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:42.439678 kubelet[2738]: E0123 17:54:42.439619 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:54:42.560630 systemd-networkd[1422]: cali7d4d294ce5d: Gained IPv6LL Jan 23 17:54:42.778898 kubelet[2738]: E0123 17:54:42.778857 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:54:42.781106 kubelet[2738]: E0123 17:54:42.780994 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:54:42.781106 kubelet[2738]: E0123 17:54:42.781056 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:54:42.818061 kubelet[2738]: I0123 17:54:42.817898 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8m2rz" podStartSLOduration=49.817877081 
podStartE2EDuration="49.817877081s" podCreationTimestamp="2026-01-23 17:53:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:54:42.796210725 +0000 UTC m=+55.423886418" watchObservedRunningTime="2026-01-23 17:54:42.817877081 +0000 UTC m=+55.445552734" Jan 23 17:54:43.008587 systemd-networkd[1422]: cali354c25c3b56: Gained IPv6LL Jan 23 17:54:43.393118 systemd-networkd[1422]: cali7658d0391fc: Gained IPv6LL Jan 23 17:54:43.781451 kubelet[2738]: E0123 17:54:43.781245 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:54:49.224805 update_engine[1512]: I20260123 17:54:49.223892 1512 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 23 17:54:49.224805 update_engine[1512]: I20260123 17:54:49.223959 1512 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 23 17:54:49.224805 update_engine[1512]: I20260123 17:54:49.224279 1512 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 23 17:54:49.226594 update_engine[1512]: I20260123 17:54:49.226558 1512 omaha_request_params.cc:62] Current group set to stable Jan 23 17:54:49.227776 update_engine[1512]: I20260123 17:54:49.227632 1512 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jan 23 17:54:49.228491 update_engine[1512]: I20260123 17:54:49.228021 1512 update_attempter.cc:643] Scheduling an action processor start. Jan 23 17:54:49.228491 update_engine[1512]: I20260123 17:54:49.228058 1512 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 17:54:49.228491 update_engine[1512]: I20260123 17:54:49.228117 1512 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 23 17:54:49.228491 update_engine[1512]: I20260123 17:54:49.228199 1512 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 17:54:49.228491 update_engine[1512]: I20260123 17:54:49.228210 1512 omaha_request_action.cc:272] Request: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: Jan 23 17:54:49.228491 update_engine[1512]: I20260123 17:54:49.228216 1512 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:54:49.236668 update_engine[1512]: I20260123 17:54:49.235674 1512 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:54:49.236668 update_engine[1512]: I20260123 17:54:49.236366 1512 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 17:54:49.237346 update_engine[1512]: E20260123 17:54:49.237315 1512 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:54:49.237507 update_engine[1512]: I20260123 17:54:49.237486 1512 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 23 17:54:49.239061 locksmithd[1544]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 23 17:54:49.508598 containerd[1527]: time="2026-01-23T17:54:49.508035574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:54:49.840154 containerd[1527]: time="2026-01-23T17:54:49.840068815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:49.842293 containerd[1527]: time="2026-01-23T17:54:49.842206593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:54:49.842469 containerd[1527]: time="2026-01-23T17:54:49.842311104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:54:49.842751 kubelet[2738]: E0123 17:54:49.842668 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:54:49.842751 kubelet[2738]: E0123 17:54:49.842731 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:54:49.843650 kubelet[2738]: E0123 17:54:49.843488 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2123739c92b4125985fdb77df2a34b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:49.846969 containerd[1527]: time="2026-01-23T17:54:49.846541945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:54:50.189354 containerd[1527]: time="2026-01-23T17:54:50.189216843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:50.192734 containerd[1527]: time="2026-01-23T17:54:50.192609888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:54:50.192734 containerd[1527]: time="2026-01-23T17:54:50.192669323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:54:50.193067 kubelet[2738]: E0123 17:54:50.193027 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:54:50.193646 kubelet[2738]: E0123 17:54:50.193159 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:54:50.195141 kubelet[2738]: E0123 17:54:50.193702 2738 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:50.196311 kubelet[2738]: E0123 17:54:50.196250 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:54:51.506972 containerd[1527]: time="2026-01-23T17:54:51.506931799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:54:51.856709 containerd[1527]: time="2026-01-23T17:54:51.856616090Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:51.858929 containerd[1527]: time="2026-01-23T17:54:51.858384393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:54:51.858929 containerd[1527]: time="2026-01-23T17:54:51.858462067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Jan 23 17:54:51.859340 kubelet[2738]: E0123 17:54:51.859274 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:51.860406 kubelet[2738]: E0123 17:54:51.859421 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:51.860406 kubelet[2738]: E0123 17:54:51.859759 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9p8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:51.861064 kubelet[2738]: E0123 17:54:51.860971 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:54:51.862531 containerd[1527]: time="2026-01-23T17:54:51.862219377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:54:52.226926 containerd[1527]: time="2026-01-23T17:54:52.226798127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:52.228751 containerd[1527]: time="2026-01-23T17:54:52.228603194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:54:52.229681 containerd[1527]: time="2026-01-23T17:54:52.228651310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:54:52.229739 kubelet[2738]: E0123 17:54:52.229318 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:54:52.229739 kubelet[2738]: E0123 17:54:52.229364 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:54:52.231225 kubelet[2738]: E0123 17:54:52.231128 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:52.234340 containerd[1527]: time="2026-01-23T17:54:52.234242057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:54:52.598720 containerd[1527]: time="2026-01-23T17:54:52.598661035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:52.600389 containerd[1527]: time="2026-01-23T17:54:52.600320993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:54:52.600389 containerd[1527]: time="2026-01-23T17:54:52.600340231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:54:52.600849 kubelet[2738]: E0123 17:54:52.600798 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:54:52.600949 kubelet[2738]: E0123 17:54:52.600856 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:54:52.601626 kubelet[2738]: E0123 17:54:52.600975 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:52.603028 kubelet[2738]: E0123 17:54:52.602987 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:54:53.508267 containerd[1527]: time="2026-01-23T17:54:53.507928910Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:54:53.863491 containerd[1527]: time="2026-01-23T17:54:53.863293511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:53.865368 containerd[1527]: time="2026-01-23T17:54:53.865282371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:54:53.865474 containerd[1527]: time="2026-01-23T17:54:53.865381404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:54:53.865687 kubelet[2738]: E0123 17:54:53.865648 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:54:53.866615 kubelet[2738]: E0123 17:54:53.865985 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:54:53.866615 kubelet[2738]: E0123 17:54:53.866156 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69ff6445f8-4fhb4_calico-system(8c709d5d-7113-42e9-bc41-af7907cc6116): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:53.867694 kubelet[2738]: E0123 17:54:53.867655 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:54:54.506842 containerd[1527]: time="2026-01-23T17:54:54.506800658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:54:54.838408 containerd[1527]: 
time="2026-01-23T17:54:54.838332691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:54.839948 containerd[1527]: time="2026-01-23T17:54:54.839904665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:54:54.840276 containerd[1527]: time="2026-01-23T17:54:54.840175167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:54.841603 kubelet[2738]: E0123 17:54:54.840472 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:54.841603 kubelet[2738]: E0123 17:54:54.840517 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:54.841603 kubelet[2738]: E0123 17:54:54.840649 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc9xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-bgdgn_calico-apiserver(f2da6827-d5c3-485d-a17f-86ee3e12342c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:54.841948 kubelet[2738]: E0123 17:54:54.841904 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:54:57.510200 containerd[1527]: time="2026-01-23T17:54:57.510022908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:54:57.862370 containerd[1527]: time="2026-01-23T17:54:57.862292170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:57.863954 containerd[1527]: time="2026-01-23T17:54:57.863886438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:54:57.864109 containerd[1527]: time="2026-01-23T17:54:57.863984952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:57.864828 kubelet[2738]: E0123 17:54:57.864500 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:57.864828 kubelet[2738]: E0123 17:54:57.864590 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:54:57.866911 kubelet[2738]: E0123 17:54:57.865697 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn85d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-xqkqz_calico-apiserver(fd167579-8a7a-45a7-a1f9-0788814a0466): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:57.868993 kubelet[2738]: E0123 17:54:57.867557 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:54:57.869446 containerd[1527]: time="2026-01-23T17:54:57.869397001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:54:58.212568 containerd[1527]: 
time="2026-01-23T17:54:58.212446979Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:54:58.214440 containerd[1527]: time="2026-01-23T17:54:58.214389233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:54:58.214661 containerd[1527]: time="2026-01-23T17:54:58.214566143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:54:58.214958 kubelet[2738]: E0123 17:54:58.214924 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:54:58.216453 kubelet[2738]: E0123 17:54:58.215067 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:54:58.216453 kubelet[2738]: E0123 17:54:58.215204 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stgxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zzc7s_calico-system(25d835ef-f3bb-42c6-bc1f-07f8b7a82a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:54:58.216763 kubelet[2738]: E0123 17:54:58.216694 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:54:59.224593 update_engine[1512]: I20260123 17:54:59.224520 1512 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:54:59.224946 update_engine[1512]: I20260123 17:54:59.224622 1512 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 
17:54:59.225053 update_engine[1512]: I20260123 17:54:59.225018 1512 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 17:54:59.225555 update_engine[1512]: E20260123 17:54:59.225521 1512 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:54:59.225628 update_engine[1512]: I20260123 17:54:59.225607 1512 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 23 17:55:05.509943 kubelet[2738]: E0123 17:55:05.509838 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:55:05.511089 kubelet[2738]: E0123 17:55:05.510658 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed 
to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:55:07.508446 kubelet[2738]: E0123 17:55:07.507617 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:55:08.505548 kubelet[2738]: E0123 17:55:08.505458 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:55:09.226560 update_engine[1512]: I20260123 17:55:09.226480 1512 libcurl_http_fetcher.cc:47] Starting/Resuming 
transfer Jan 23 17:55:09.226560 update_engine[1512]: I20260123 17:55:09.226565 1512 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:55:09.227378 update_engine[1512]: I20260123 17:55:09.226883 1512 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 23 17:55:09.227494 update_engine[1512]: E20260123 17:55:09.227392 1512 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:55:09.227494 update_engine[1512]: I20260123 17:55:09.227478 1512 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 23 17:55:09.509106 kubelet[2738]: E0123 17:55:09.508840 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:55:09.512461 kubelet[2738]: E0123 17:55:09.512364 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:55:12.507240 kubelet[2738]: E0123 17:55:12.506911 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:55:16.506708 containerd[1527]: time="2026-01-23T17:55:16.505530413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:55:16.863532 containerd[1527]: time="2026-01-23T17:55:16.863471404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:16.865276 containerd[1527]: time="2026-01-23T17:55:16.865086700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:55:16.865276 containerd[1527]: time="2026-01-23T17:55:16.865195459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:55:16.865540 kubelet[2738]: E0123 17:55:16.865386 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:55:16.865540 kubelet[2738]: E0123 17:55:16.865444 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:55:16.865928 kubelet[2738]: E0123 17:55:16.865554 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPr
obe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:16.868395 containerd[1527]: time="2026-01-23T17:55:16.868150816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:55:17.199896 containerd[1527]: time="2026-01-23T17:55:17.198868484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:17.202840 containerd[1527]: time="2026-01-23T17:55:17.202684355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:55:17.202840 containerd[1527]: time="2026-01-23T17:55:17.202793634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:55:17.203054 kubelet[2738]: E0123 17:55:17.203001 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:55:17.203137 kubelet[2738]: E0123 17:55:17.203058 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:55:17.203374 kubelet[2738]: E0123 17:55:17.203326 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&S
eccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:17.205447 kubelet[2738]: E0123 17:55:17.204762 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:55:19.227301 update_engine[1512]: I20260123 17:55:19.226599 1512 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:55:19.227301 update_engine[1512]: I20260123 17:55:19.226688 1512 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:55:19.227301 update_engine[1512]: I20260123 17:55:19.227039 1512 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 17:55:19.228344 update_engine[1512]: E20260123 17:55:19.228308 1512 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:55:19.228517 update_engine[1512]: I20260123 17:55:19.228498 1512 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 17:55:19.228593 update_engine[1512]: I20260123 17:55:19.228577 1512 omaha_request_action.cc:617] Omaha request response: Jan 23 17:55:19.228730 update_engine[1512]: E20260123 17:55:19.228708 1512 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 23 17:55:19.228795 update_engine[1512]: I20260123 17:55:19.228781 1512 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.228828 1512 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.228839 1512 update_attempter.cc:306] Processing Done. Jan 23 17:55:19.230106 update_engine[1512]: E20260123 17:55:19.228854 1512 update_attempter.cc:619] Update failed. Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.228859 1512 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.228863 1512 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.228869 1512 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.228996 1512 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.229022 1512 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.229027 1512 omaha_request_action.cc:272] Request: Jan 23 17:55:19.230106 update_engine[1512]: Jan 23 17:55:19.230106 update_engine[1512]: Jan 23 17:55:19.230106 update_engine[1512]: Jan 23 17:55:19.230106 update_engine[1512]: Jan 23 17:55:19.230106 update_engine[1512]: Jan 23 17:55:19.230106 update_engine[1512]: Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.229032 1512 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.229053 1512 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 23 17:55:19.230106 update_engine[1512]: I20260123 17:55:19.229362 1512 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 23 17:55:19.231218 locksmithd[1544]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 23 17:55:19.231551 update_engine[1512]: E20260123 17:55:19.230901 1512 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231009 1512 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231019 1512 omaha_request_action.cc:617] Omaha request response: Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231026 1512 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231030 1512 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231035 1512 update_attempter.cc:306] Processing Done. Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231040 1512 update_attempter.cc:310] Error event sent. 
Jan 23 17:55:19.231551 update_engine[1512]: I20260123 17:55:19.231048 1512 update_check_scheduler.cc:74] Next update check in 42m22s Jan 23 17:55:19.232588 locksmithd[1544]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 23 17:55:19.505830 containerd[1527]: time="2026-01-23T17:55:19.505297986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:55:19.848249 containerd[1527]: time="2026-01-23T17:55:19.848127173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:19.850241 containerd[1527]: time="2026-01-23T17:55:19.850148793Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:55:19.850419 containerd[1527]: time="2026-01-23T17:55:19.850298831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:55:19.850755 kubelet[2738]: E0123 17:55:19.850600 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:55:19.850755 kubelet[2738]: E0123 17:55:19.850682 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:55:19.850755 kubelet[2738]: E0123 
17:55:19.850864 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2123739c92b4125985fdb77df2a34b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:19.853759 containerd[1527]: time="2026-01-23T17:55:19.853687398Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:55:20.194532 containerd[1527]: time="2026-01-23T17:55:20.194278411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:20.196357 containerd[1527]: time="2026-01-23T17:55:20.196228794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:55:20.196357 containerd[1527]: time="2026-01-23T17:55:20.196275754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:55:20.196682 kubelet[2738]: E0123 17:55:20.196591 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:55:20.196779 kubelet[2738]: E0123 17:55:20.196740 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:55:20.197682 kubelet[2738]: E0123 17:55:20.197037 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:20.198992 kubelet[2738]: E0123 17:55:20.198949 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:55:20.507488 containerd[1527]: time="2026-01-23T17:55:20.506259720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:55:20.842673 containerd[1527]: time="2026-01-23T17:55:20.842624866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:20.844538 containerd[1527]: time="2026-01-23T17:55:20.844478090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:55:20.845043 containerd[1527]: time="2026-01-23T17:55:20.844594289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Jan 23 17:55:20.845113 kubelet[2738]: E0123 17:55:20.844743 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:55:20.845113 kubelet[2738]: E0123 17:55:20.844801 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:55:20.845113 kubelet[2738]: E0123 17:55:20.845036 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69ff6445f8-4fhb4_calico-system(8c709d5d-7113-42e9-bc41-af7907cc6116): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:20.845698 containerd[1527]: time="2026-01-23T17:55:20.845447562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:55:20.846276 kubelet[2738]: E0123 17:55:20.846226 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:55:21.199893 containerd[1527]: 
time="2026-01-23T17:55:21.199766081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:21.201575 containerd[1527]: time="2026-01-23T17:55:21.201495749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:55:21.201668 containerd[1527]: time="2026-01-23T17:55:21.201646428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:55:21.202010 kubelet[2738]: E0123 17:55:21.201874 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:55:21.202010 kubelet[2738]: E0123 17:55:21.201958 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:55:21.203684 kubelet[2738]: E0123 17:55:21.203595 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9p8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:21.205405 kubelet[2738]: E0123 17:55:21.205341 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:55:24.506349 containerd[1527]: time="2026-01-23T17:55:24.506121831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:55:24.844894 containerd[1527]: time="2026-01-23T17:55:24.844796843Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:24.846479 containerd[1527]: time="2026-01-23T17:55:24.846400318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:55:24.846803 containerd[1527]: time="2026-01-23T17:55:24.846527958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:55:24.846841 kubelet[2738]: E0123 17:55:24.846671 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:55:24.846841 kubelet[2738]: E0123 17:55:24.846716 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:55:24.848642 kubelet[2738]: E0123 17:55:24.847749 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc9xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-bgdgn_calico-apiserver(f2da6827-d5c3-485d-a17f-86ee3e12342c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:24.848833 containerd[1527]: time="2026-01-23T17:55:24.848277153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:55:24.850632 kubelet[2738]: E0123 17:55:24.850582 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:55:25.213562 containerd[1527]: 
time="2026-01-23T17:55:25.213374954Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:25.215514 containerd[1527]: time="2026-01-23T17:55:25.215455470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:55:25.215514 containerd[1527]: time="2026-01-23T17:55:25.215480830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:55:25.215803 kubelet[2738]: E0123 17:55:25.215717 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:55:25.215803 kubelet[2738]: E0123 17:55:25.215763 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:55:25.215965 kubelet[2738]: E0123 17:55:25.215912 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn85d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-xqkqz_calico-apiserver(fd167579-8a7a-45a7-a1f9-0788814a0466): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:25.217472 kubelet[2738]: E0123 17:55:25.217311 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:55:25.507809 containerd[1527]: time="2026-01-23T17:55:25.507676705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:55:25.857832 containerd[1527]: time="2026-01-23T17:55:25.857688724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:55:25.859362 containerd[1527]: time="2026-01-23T17:55:25.859281601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:55:25.859362 containerd[1527]: time="2026-01-23T17:55:25.859323121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:55:25.859665 kubelet[2738]: E0123 17:55:25.859616 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:55:25.860042 kubelet[2738]: E0123 17:55:25.859667 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:55:25.860157 kubelet[2738]: E0123 17:55:25.859958 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stgxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zzc7s_calico-system(25d835ef-f3bb-42c6-bc1f-07f8b7a82a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:55:25.861272 kubelet[2738]: E0123 17:55:25.861225 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:55:30.506746 kubelet[2738]: E0123 17:55:30.506235 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:55:31.510410 kubelet[2738]: E0123 17:55:31.508692 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:55:34.505036 kubelet[2738]: E0123 17:55:34.504973 2738 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:55:35.509696 kubelet[2738]: E0123 17:55:35.509328 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:55:37.508032 kubelet[2738]: E0123 17:55:37.507406 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:55:37.511803 kubelet[2738]: E0123 17:55:37.511752 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:55:40.507978 kubelet[2738]: E0123 17:55:40.507568 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:55:41.508632 kubelet[2738]: E0123 17:55:41.508570 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:55:45.515077 kubelet[2738]: E0123 17:55:45.514737 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:55:47.511455 kubelet[2738]: E0123 17:55:47.511193 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:55:48.505524 kubelet[2738]: E0123 17:55:48.505484 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:55:49.508002 kubelet[2738]: E0123 17:55:49.507143 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:55:52.505517 kubelet[2738]: E0123 17:55:52.505386 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:55:53.507981 kubelet[2738]: E0123 17:55:53.507596 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:55:55.509107 kubelet[2738]: E0123 17:55:55.509018 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:55:57.505712 kubelet[2738]: E0123 17:55:57.504996 2738 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:55:58.505470 kubelet[2738]: E0123 17:55:58.505020 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:56:02.504883 kubelet[2738]: E0123 17:56:02.504815 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:56:03.504624 kubelet[2738]: E0123 17:56:03.504532 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:56:04.505522 containerd[1527]: time="2026-01-23T17:56:04.505414678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:56:04.887467 containerd[1527]: time="2026-01-23T17:56:04.887403777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:04.889397 containerd[1527]: time="2026-01-23T17:56:04.889339587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:56:04.889553 containerd[1527]: time="2026-01-23T17:56:04.889457950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:56:04.889638 kubelet[2738]: E0123 17:56:04.889594 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:56:04.890041 kubelet[2738]: E0123 17:56:04.889650 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:56:04.890041 kubelet[2738]: E0123 17:56:04.889761 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2123739c92b4125985fdb77df2a34b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:04.894673 containerd[1527]: time="2026-01-23T17:56:04.894624405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:56:05.262610 containerd[1527]: time="2026-01-23T17:56:05.261521923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:05.263583 containerd[1527]: time="2026-01-23T17:56:05.263485894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:56:05.263583 containerd[1527]: time="2026-01-23T17:56:05.263552896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:56:05.263874 kubelet[2738]: E0123 17:56:05.263801 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:56:05.263874 kubelet[2738]: E0123 17:56:05.263856 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:56:05.264912 kubelet[2738]: E0123 17:56:05.264854 2738 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:05.266536 kubelet[2738]: E0123 17:56:05.266483 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:56:05.506599 kubelet[2738]: E0123 17:56:05.506550 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:56:10.505760 containerd[1527]: time="2026-01-23T17:56:10.505669181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:56:10.859983 containerd[1527]: time="2026-01-23T17:56:10.859918412Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:10.861359 containerd[1527]: time="2026-01-23T17:56:10.861302891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:56:10.861669 containerd[1527]: time="2026-01-23T17:56:10.861397774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:56:10.862714 kubelet[2738]: E0123 17:56:10.862581 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:56:10.862714 kubelet[2738]: E0123 17:56:10.862670 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:56:10.864639 kubelet[2738]: E0123 17:56:10.862920 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9p8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:10.864639 kubelet[2738]: E0123 17:56:10.864130 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:56:10.864783 containerd[1527]: time="2026-01-23T17:56:10.863550194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 17:56:11.231092 containerd[1527]: time="2026-01-23T17:56:11.230853257Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:11.233466 containerd[1527]: time="2026-01-23T17:56:11.232477303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 17:56:11.233631 containerd[1527]: time="2026-01-23T17:56:11.232516424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 17:56:11.233882 kubelet[2738]: E0123 17:56:11.233823 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:56:11.233948 kubelet[2738]: E0123 17:56:11.233902 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 17:56:11.234216 kubelet[2738]: E0123 17:56:11.234137 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:11.237655 containerd[1527]: time="2026-01-23T17:56:11.237620049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 17:56:11.589592 containerd[1527]: time="2026-01-23T17:56:11.589544997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:11.591239 containerd[1527]: time="2026-01-23T17:56:11.591193324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 17:56:11.591379 containerd[1527]: time="2026-01-23T17:56:11.591287127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 17:56:11.591785 kubelet[2738]: E0123 17:56:11.591552 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:56:11.591785 kubelet[2738]: E0123 17:56:11.591667 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 17:56:11.592088 kubelet[2738]: E0123 17:56:11.592034 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:11.593263 kubelet[2738]: E0123 17:56:11.593224 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:56:12.506819 containerd[1527]: time="2026-01-23T17:56:12.506738669Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 17:56:12.904575 containerd[1527]: time="2026-01-23T17:56:12.904381783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:12.906071 containerd[1527]: time="2026-01-23T17:56:12.905956028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 17:56:12.906071 containerd[1527]: time="2026-01-23T17:56:12.906022270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 17:56:12.906257 kubelet[2738]: E0123 17:56:12.906181 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:56:12.906561 kubelet[2738]: E0123 17:56:12.906266 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 17:56:12.906561 kubelet[2738]: E0123 17:56:12.906422 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69ff6445f8-4fhb4_calico-system(8c709d5d-7113-42e9-bc41-af7907cc6116): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:12.907886 kubelet[2738]: E0123 17:56:12.907830 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:56:13.509368 containerd[1527]: time="2026-01-23T17:56:13.509032586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:56:13.854020 containerd[1527]: 
time="2026-01-23T17:56:13.853941648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:13.857220 containerd[1527]: time="2026-01-23T17:56:13.857092819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:56:13.858406 containerd[1527]: time="2026-01-23T17:56:13.857195102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:56:13.858576 kubelet[2738]: E0123 17:56:13.857763 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:56:13.858576 kubelet[2738]: E0123 17:56:13.857835 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:56:13.858576 kubelet[2738]: E0123 17:56:13.858033 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc9xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-bgdgn_calico-apiserver(f2da6827-d5c3-485d-a17f-86ee3e12342c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:13.859727 kubelet[2738]: E0123 17:56:13.859545 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:56:15.507810 kubelet[2738]: E0123 17:56:15.507741 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:56:16.507035 containerd[1527]: time="2026-01-23T17:56:16.506990064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:56:16.861653 
containerd[1527]: time="2026-01-23T17:56:16.861386113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:16.863251 containerd[1527]: time="2026-01-23T17:56:16.863105524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:56:16.863251 containerd[1527]: time="2026-01-23T17:56:16.863207927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:56:16.863849 kubelet[2738]: E0123 17:56:16.863724 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:56:16.863849 kubelet[2738]: E0123 17:56:16.863793 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:56:16.864546 kubelet[2738]: E0123 17:56:16.864215 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tn85d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9fdf556b5-xqkqz_calico-apiserver(fd167579-8a7a-45a7-a1f9-0788814a0466): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:16.865137 containerd[1527]: time="2026-01-23T17:56:16.865068942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 17:56:16.866246 kubelet[2738]: E0123 17:56:16.866170 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:56:17.229138 containerd[1527]: time="2026-01-23T17:56:17.228953607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:56:17.231507 containerd[1527]: time="2026-01-23T17:56:17.231380560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 17:56:17.231507 containerd[1527]: time="2026-01-23T17:56:17.231452802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 17:56:17.231852 kubelet[2738]: E0123 17:56:17.231731 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:56:17.231852 kubelet[2738]: E0123 17:56:17.231798 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 17:56:17.232074 kubelet[2738]: E0123 17:56:17.231985 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stgxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zzc7s_calico-system(25d835ef-f3bb-42c6-bc1f-07f8b7a82a66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 17:56:17.233535 kubelet[2738]: E0123 17:56:17.233381 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:56:19.198823 systemd[1]: Started sshd@7-49.13.3.65:22-68.220.241.50:37078.service - OpenSSH per-connection server daemon (68.220.241.50:37078). Jan 23 17:56:19.872313 sshd[4997]: Accepted publickey for core from 68.220.241.50 port 37078 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:19.875001 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:19.880842 systemd-logind[1511]: New session 8 of user core. Jan 23 17:56:19.887698 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 17:56:20.445644 sshd[5000]: Connection closed by 68.220.241.50 port 37078 Jan 23 17:56:20.446202 sshd-session[4997]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:20.453122 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 17:56:20.456213 systemd[1]: sshd@7-49.13.3.65:22-68.220.241.50:37078.service: Deactivated successfully. Jan 23 17:56:20.456261 systemd-logind[1511]: Session 8 logged out. Waiting for processes to exit. Jan 23 17:56:20.467622 systemd-logind[1511]: Removed session 8. 
Jan 23 17:56:24.504797 kubelet[2738]: E0123 17:56:24.504730 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:56:24.504797 kubelet[2738]: E0123 17:56:24.504732 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:56:25.510449 kubelet[2738]: E0123 17:56:25.509964 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:56:25.555538 systemd[1]: Started sshd@8-49.13.3.65:22-68.220.241.50:57094.service - OpenSSH per-connection server daemon (68.220.241.50:57094). Jan 23 17:56:26.202452 sshd[5017]: Accepted publickey for core from 68.220.241.50 port 57094 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:26.203509 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:26.208497 systemd-logind[1511]: New session 9 of user core. Jan 23 17:56:26.212789 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 17:56:26.507036 kubelet[2738]: E0123 17:56:26.506899 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 
17:56:26.734292 sshd[5020]: Connection closed by 68.220.241.50 port 57094 Jan 23 17:56:26.735833 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:26.743060 systemd[1]: sshd@8-49.13.3.65:22-68.220.241.50:57094.service: Deactivated successfully. Jan 23 17:56:26.748848 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 17:56:26.753698 systemd-logind[1511]: Session 9 logged out. Waiting for processes to exit. Jan 23 17:56:26.755610 systemd-logind[1511]: Removed session 9. Jan 23 17:56:27.506906 kubelet[2738]: E0123 17:56:27.505981 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:56:29.506880 kubelet[2738]: E0123 17:56:29.506811 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:56:30.504248 kubelet[2738]: E0123 17:56:30.504160 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:56:31.842697 systemd[1]: Started sshd@9-49.13.3.65:22-68.220.241.50:57104.service - OpenSSH per-connection server daemon (68.220.241.50:57104). Jan 23 17:56:32.475502 sshd[5033]: Accepted publickey for core from 68.220.241.50 port 57104 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:32.478643 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:32.485823 systemd-logind[1511]: New session 10 of user core. Jan 23 17:56:32.493744 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 17:56:33.051528 sshd[5036]: Connection closed by 68.220.241.50 port 57104 Jan 23 17:56:33.052053 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:33.057263 systemd[1]: sshd@9-49.13.3.65:22-68.220.241.50:57104.service: Deactivated successfully. Jan 23 17:56:33.061317 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 17:56:33.062534 systemd-logind[1511]: Session 10 logged out. Waiting for processes to exit. Jan 23 17:56:33.064767 systemd-logind[1511]: Removed session 10. Jan 23 17:56:33.166725 systemd[1]: Started sshd@10-49.13.3.65:22-68.220.241.50:49814.service - OpenSSH per-connection server daemon (68.220.241.50:49814). 
Jan 23 17:56:33.796380 sshd[5049]: Accepted publickey for core from 68.220.241.50 port 49814 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:33.799309 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:33.804706 systemd-logind[1511]: New session 11 of user core. Jan 23 17:56:33.811625 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 17:56:34.376184 sshd[5052]: Connection closed by 68.220.241.50 port 49814 Jan 23 17:56:34.379287 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:34.385343 systemd[1]: sshd@10-49.13.3.65:22-68.220.241.50:49814.service: Deactivated successfully. Jan 23 17:56:34.385928 systemd-logind[1511]: Session 11 logged out. Waiting for processes to exit. Jan 23 17:56:34.390109 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 17:56:34.392764 systemd-logind[1511]: Removed session 11. Jan 23 17:56:34.494563 systemd[1]: Started sshd@11-49.13.3.65:22-68.220.241.50:49824.service - OpenSSH per-connection server daemon (68.220.241.50:49824). Jan 23 17:56:35.151212 sshd[5062]: Accepted publickey for core from 68.220.241.50 port 49824 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:35.154162 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:35.161755 systemd-logind[1511]: New session 12 of user core. Jan 23 17:56:35.169811 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 17:56:35.506741 kubelet[2738]: E0123 17:56:35.505829 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:56:35.732985 sshd[5065]: Connection closed by 68.220.241.50 port 49824 Jan 23 17:56:35.733582 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:35.738382 systemd[1]: sshd@11-49.13.3.65:22-68.220.241.50:49824.service: Deactivated successfully. Jan 23 17:56:35.744287 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 17:56:35.745998 systemd-logind[1511]: Session 12 logged out. Waiting for processes to exit. Jan 23 17:56:35.748574 systemd-logind[1511]: Removed session 12. 
Jan 23 17:56:36.507172 kubelet[2738]: E0123 17:56:36.507085 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:56:38.505476 kubelet[2738]: E0123 17:56:38.505055 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:56:40.845044 systemd[1]: Started sshd@12-49.13.3.65:22-68.220.241.50:49834.service - OpenSSH per-connection server daemon (68.220.241.50:49834). 
Jan 23 17:56:41.502391 sshd[5105]: Accepted publickey for core from 68.220.241.50 port 49834 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:41.509849 kubelet[2738]: E0123 17:56:41.509785 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:56:41.511376 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:41.515890 kubelet[2738]: E0123 17:56:41.515527 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:56:41.520711 
systemd-logind[1511]: New session 13 of user core. Jan 23 17:56:41.526375 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 17:56:42.048553 sshd[5108]: Connection closed by 68.220.241.50 port 49834 Jan 23 17:56:42.049175 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:42.053815 systemd[1]: sshd@12-49.13.3.65:22-68.220.241.50:49834.service: Deactivated successfully. Jan 23 17:56:42.058138 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 17:56:42.061483 systemd-logind[1511]: Session 13 logged out. Waiting for processes to exit. Jan 23 17:56:42.063748 systemd-logind[1511]: Removed session 13. Jan 23 17:56:42.505894 kubelet[2738]: E0123 17:56:42.505692 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:56:44.505456 kubelet[2738]: E0123 17:56:44.503854 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:56:47.161753 systemd[1]: 
Started sshd@13-49.13.3.65:22-68.220.241.50:35962.service - OpenSSH per-connection server daemon (68.220.241.50:35962). Jan 23 17:56:47.510695 kubelet[2738]: E0123 17:56:47.510568 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:56:47.781494 sshd[5120]: Accepted publickey for core from 68.220.241.50 port 35962 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:47.783930 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:47.791496 systemd-logind[1511]: New session 14 of user core. Jan 23 17:56:47.796711 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 17:56:48.307101 sshd[5125]: Connection closed by 68.220.241.50 port 35962 Jan 23 17:56:48.307004 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:48.314005 systemd[1]: sshd@13-49.13.3.65:22-68.220.241.50:35962.service: Deactivated successfully. Jan 23 17:56:48.317848 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 23 17:56:48.319314 systemd-logind[1511]: Session 14 logged out. Waiting for processes to exit. Jan 23 17:56:48.322937 systemd-logind[1511]: Removed session 14. Jan 23 17:56:48.505712 kubelet[2738]: E0123 17:56:48.505406 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:56:53.420300 systemd[1]: Started sshd@14-49.13.3.65:22-68.220.241.50:44810.service - OpenSSH per-connection server daemon (68.220.241.50:44810). Jan 23 17:56:53.508445 kubelet[2738]: E0123 17:56:53.507635 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:56:53.508445 kubelet[2738]: E0123 17:56:53.507740 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:56:53.509292 kubelet[2738]: E0123 17:56:53.509188 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:56:54.043158 sshd[5137]: Accepted publickey for core from 68.220.241.50 port 44810 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:54.046627 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:54.052808 systemd-logind[1511]: New session 15 of user core. Jan 23 17:56:54.056683 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 17:56:54.505659 kubelet[2738]: E0123 17:56:54.504349 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:56:54.554680 sshd[5140]: Connection closed by 68.220.241.50 port 44810 Jan 23 17:56:54.554550 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:54.562238 systemd[1]: sshd@14-49.13.3.65:22-68.220.241.50:44810.service: Deactivated successfully. Jan 23 17:56:54.565224 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 17:56:54.567637 systemd-logind[1511]: Session 15 logged out. Waiting for processes to exit. Jan 23 17:56:54.568969 systemd-logind[1511]: Removed session 15. Jan 23 17:56:54.667861 systemd[1]: Started sshd@15-49.13.3.65:22-68.220.241.50:44818.service - OpenSSH per-connection server daemon (68.220.241.50:44818). Jan 23 17:56:55.304448 sshd[5152]: Accepted publickey for core from 68.220.241.50 port 44818 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:55.307268 sshd-session[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:55.313405 systemd-logind[1511]: New session 16 of user core. Jan 23 17:56:55.319642 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 17:56:55.995565 sshd[5157]: Connection closed by 68.220.241.50 port 44818 Jan 23 17:56:55.996039 sshd-session[5152]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:56.003334 systemd[1]: sshd@15-49.13.3.65:22-68.220.241.50:44818.service: Deactivated successfully. Jan 23 17:56:56.007846 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 17:56:56.009823 systemd-logind[1511]: Session 16 logged out. Waiting for processes to exit. Jan 23 17:56:56.012346 systemd-logind[1511]: Removed session 16. Jan 23 17:56:56.114262 systemd[1]: Started sshd@16-49.13.3.65:22-68.220.241.50:44830.service - OpenSSH per-connection server daemon (68.220.241.50:44830). Jan 23 17:56:56.506748 kubelet[2738]: E0123 17:56:56.504882 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:56:56.757617 sshd[5167]: Accepted publickey for core from 68.220.241.50 port 44830 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:56.760341 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:56.771204 systemd-logind[1511]: New session 17 of user core. Jan 23 17:56:56.774700 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 17:56:57.964749 sshd[5170]: Connection closed by 68.220.241.50 port 44830 Jan 23 17:56:57.965619 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:57.971792 systemd-logind[1511]: Session 17 logged out. 
Waiting for processes to exit. Jan 23 17:56:57.972494 systemd[1]: sshd@16-49.13.3.65:22-68.220.241.50:44830.service: Deactivated successfully. Jan 23 17:56:57.976289 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 17:56:57.982534 systemd-logind[1511]: Removed session 17. Jan 23 17:56:58.069934 systemd[1]: Started sshd@17-49.13.3.65:22-68.220.241.50:44840.service - OpenSSH per-connection server daemon (68.220.241.50:44840). Jan 23 17:56:58.710269 sshd[5188]: Accepted publickey for core from 68.220.241.50 port 44840 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:56:58.712361 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:56:58.719139 systemd-logind[1511]: New session 18 of user core. Jan 23 17:56:58.726352 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 17:56:59.416722 sshd[5191]: Connection closed by 68.220.241.50 port 44840 Jan 23 17:56:59.417606 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Jan 23 17:56:59.424282 systemd[1]: sshd@17-49.13.3.65:22-68.220.241.50:44840.service: Deactivated successfully. Jan 23 17:56:59.430573 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 17:56:59.432672 systemd-logind[1511]: Session 18 logged out. Waiting for processes to exit. Jan 23 17:56:59.436237 systemd-logind[1511]: Removed session 18. Jan 23 17:56:59.534983 systemd[1]: Started sshd@18-49.13.3.65:22-68.220.241.50:44848.service - OpenSSH per-connection server daemon (68.220.241.50:44848). Jan 23 17:57:00.194947 sshd[5203]: Accepted publickey for core from 68.220.241.50 port 44848 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:57:00.197949 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:00.205265 systemd-logind[1511]: New session 19 of user core. Jan 23 17:57:00.210796 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 17:57:00.507237 kubelet[2738]: E0123 17:57:00.506986 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:57:00.730853 sshd[5206]: Connection closed by 68.220.241.50 port 44848 Jan 23 17:57:00.730043 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:00.736789 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 17:57:00.737188 systemd-logind[1511]: Session 19 logged out. Waiting for processes to exit. Jan 23 17:57:00.738350 systemd[1]: sshd@18-49.13.3.65:22-68.220.241.50:44848.service: Deactivated successfully. Jan 23 17:57:00.745083 systemd-logind[1511]: Removed session 19. 
Jan 23 17:57:03.511534 kubelet[2738]: E0123 17:57:03.510318 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:57:05.506916 kubelet[2738]: E0123 17:57:05.506686 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:57:05.509596 kubelet[2738]: E0123 17:57:05.509536 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:57:05.845641 systemd[1]: Started sshd@19-49.13.3.65:22-68.220.241.50:44454.service - OpenSSH per-connection server daemon (68.220.241.50:44454). Jan 23 17:57:06.488726 sshd[5220]: Accepted publickey for core from 68.220.241.50 port 44454 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:57:06.490744 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:06.496934 systemd-logind[1511]: New session 20 of user core. Jan 23 17:57:06.501936 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 17:57:06.504968 kubelet[2738]: E0123 17:57:06.504902 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:57:07.032463 sshd[5223]: Connection closed by 68.220.241.50 port 44454 Jan 23 17:57:07.033014 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:07.038532 systemd[1]: sshd@19-49.13.3.65:22-68.220.241.50:44454.service: Deactivated successfully. Jan 23 17:57:07.043852 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 17:57:07.047613 systemd-logind[1511]: Session 20 logged out. 
Waiting for processes to exit. Jan 23 17:57:07.051034 systemd-logind[1511]: Removed session 20. Jan 23 17:57:07.506096 kubelet[2738]: E0123 17:57:07.505028 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:57:08.506065 kubelet[2738]: E0123 17:57:08.505613 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:57:12.138995 systemd[1]: Started sshd@20-49.13.3.65:22-68.220.241.50:44466.service - OpenSSH per-connection server daemon (68.220.241.50:44466). Jan 23 17:57:12.776263 sshd[5259]: Accepted publickey for core from 68.220.241.50 port 44466 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:57:12.779369 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:12.788564 systemd-logind[1511]: New session 21 of user core. Jan 23 17:57:12.793850 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 17:57:13.302458 sshd[5262]: Connection closed by 68.220.241.50 port 44466 Jan 23 17:57:13.300748 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:13.305524 systemd[1]: sshd@20-49.13.3.65:22-68.220.241.50:44466.service: Deactivated successfully. Jan 23 17:57:13.312333 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 17:57:13.314204 systemd-logind[1511]: Session 21 logged out. Waiting for processes to exit. Jan 23 17:57:13.317300 systemd-logind[1511]: Removed session 21. Jan 23 17:57:13.510840 kubelet[2738]: E0123 17:57:13.510782 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:57:14.504099 kubelet[2738]: E0123 17:57:14.504004 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:57:17.506257 kubelet[2738]: E0123 17:57:17.505758 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:57:18.414057 systemd[1]: Started sshd@21-49.13.3.65:22-68.220.241.50:33994.service - OpenSSH per-connection server daemon (68.220.241.50:33994). Jan 23 17:57:19.068207 sshd[5280]: Accepted publickey for core from 68.220.241.50 port 33994 ssh2: RSA SHA256:B41eFehLrFiB1TLq33xEWe4xG0Kg5UZxPTPxVefD7iE Jan 23 17:57:19.068993 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:57:19.074143 systemd-logind[1511]: New session 22 of user core. Jan 23 17:57:19.080954 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 17:57:19.506540 kubelet[2738]: E0123 17:57:19.505557 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:57:19.508459 kubelet[2738]: E0123 17:57:19.507651 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:57:19.509460 kubelet[2738]: E0123 17:57:19.508986 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:57:19.602771 sshd[5283]: 
Connection closed by 68.220.241.50 port 33994 Jan 23 17:57:19.601548 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 23 17:57:19.608124 systemd-logind[1511]: Session 22 logged out. Waiting for processes to exit. Jan 23 17:57:19.608486 systemd[1]: sshd@21-49.13.3.65:22-68.220.241.50:33994.service: Deactivated successfully. Jan 23 17:57:19.612393 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 17:57:19.619738 systemd-logind[1511]: Removed session 22. Jan 23 17:57:22.504573 kubelet[2738]: E0123 17:57:22.504183 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66" Jan 23 17:57:24.504934 kubelet[2738]: E0123 17:57:24.504849 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e" Jan 23 17:57:27.506814 kubelet[2738]: E0123 17:57:27.506742 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-bgdgn" podUID="f2da6827-d5c3-485d-a17f-86ee3e12342c" Jan 23 17:57:29.504851 containerd[1527]: time="2026-01-23T17:57:29.504328336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:57:29.861129 containerd[1527]: time="2026-01-23T17:57:29.861057011Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:29.862637 containerd[1527]: time="2026-01-23T17:57:29.862555321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:57:29.862783 containerd[1527]: time="2026-01-23T17:57:29.862682240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:57:29.862943 kubelet[2738]: E0123 17:57:29.862896 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:29.863788 kubelet[2738]: E0123 17:57:29.863497 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:57:29.867163 kubelet[2738]: E0123 17:57:29.867096 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e2123739c92b4125985fdb77df2a34b0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:29.869230 containerd[1527]: time="2026-01-23T17:57:29.869180117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:57:30.215477 containerd[1527]: time="2026-01-23T17:57:30.215118263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:30.216670 containerd[1527]: time="2026-01-23T17:57:30.216553295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:57:30.216670 containerd[1527]: time="2026-01-23T17:57:30.216641414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:57:30.217157 kubelet[2738]: E0123 17:57:30.216921 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:30.217157 kubelet[2738]: E0123 17:57:30.216972 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:57:30.217157 kubelet[2738]: E0123 17:57:30.217079 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vsk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79545dbc5f-lz9w4_calico-system(d427e806-f7cc-4b74-be8f-94a08c7ee702): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:30.218330 kubelet[2738]: E0123 17:57:30.218278 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79545dbc5f-lz9w4" podUID="d427e806-f7cc-4b74-be8f-94a08c7ee702" Jan 23 17:57:31.505163 kubelet[2738]: E0123 17:57:31.504988 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-9fdf556b5-xqkqz" podUID="fd167579-8a7a-45a7-a1f9-0788814a0466" Jan 23 17:57:31.505163 kubelet[2738]: E0123 17:57:31.505085 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69ff6445f8-4fhb4" podUID="8c709d5d-7113-42e9-bc41-af7907cc6116" Jan 23 17:57:32.505489 containerd[1527]: time="2026-01-23T17:57:32.505397099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 17:57:32.842203 containerd[1527]: time="2026-01-23T17:57:32.842119312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:57:32.843687 containerd[1527]: time="2026-01-23T17:57:32.843596225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 17:57:32.843811 containerd[1527]: time="2026-01-23T17:57:32.843756304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 17:57:32.844058 kubelet[2738]: E0123 17:57:32.843976 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:32.844058 kubelet[2738]: E0123 17:57:32.844045 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 17:57:32.844748 kubelet[2738]: E0123 17:57:32.844180 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k9p8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bfd8975d7-8wdqx_calico-apiserver(67b2de8c-adfd-41ce-a209-5eab9ae1e756): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 17:57:32.845403 kubelet[2738]: E0123 17:57:32.845342 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bfd8975d7-8wdqx" podUID="67b2de8c-adfd-41ce-a209-5eab9ae1e756" Jan 23 17:57:34.321836 systemd[1]: cri-containerd-f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080.scope: Deactivated successfully. 
Jan 23 17:57:34.322162 systemd[1]: cri-containerd-f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080.scope: Consumed 46.635s CPU time, 116.1M memory peak. Jan 23 17:57:34.325933 containerd[1527]: time="2026-01-23T17:57:34.325892899Z" level=info msg="received container exit event container_id:\"f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080\" id:\"f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080\" pid:3057 exit_status:1 exited_at:{seconds:1769191054 nanos:325162982}" Jan 23 17:57:34.356243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080-rootfs.mount: Deactivated successfully. Jan 23 17:57:34.792117 kubelet[2738]: E0123 17:57:34.791404 2738 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46096->10.0.0.2:2379: read: connection timed out" Jan 23 17:57:34.880960 systemd[1]: cri-containerd-715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf.scope: Deactivated successfully. Jan 23 17:57:34.882124 systemd[1]: cri-containerd-715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf.scope: Consumed 4.860s CPU time, 60.1M memory peak, 3.2M read from disk. Jan 23 17:57:34.885797 containerd[1527]: time="2026-01-23T17:57:34.885747051Z" level=info msg="received container exit event container_id:\"715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf\" id:\"715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf\" pid:2599 exit_status:1 exited_at:{seconds:1769191054 nanos:885366013}" Jan 23 17:57:34.912514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf-rootfs.mount: Deactivated successfully. 
Jan 23 17:57:35.312853 kubelet[2738]: I0123 17:57:35.312752 2738 scope.go:117] "RemoveContainer" containerID="715bf934593dfdaa329b6d99e3765b4df72163079d485934f3a628297cedc0cf" Jan 23 17:57:35.315950 kubelet[2738]: I0123 17:57:35.315918 2738 scope.go:117] "RemoveContainer" containerID="f9a1cb1ad673c27f09e1014848ec09bbb9a4595ff38959caf3871f059de2f080" Jan 23 17:57:35.323941 containerd[1527]: time="2026-01-23T17:57:35.323872244Z" level=info msg="CreateContainer within sandbox \"21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 17:57:35.324211 containerd[1527]: time="2026-01-23T17:57:35.323876724Z" level=info msg="CreateContainer within sandbox \"9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 17:57:35.337338 containerd[1527]: time="2026-01-23T17:57:35.337281879Z" level=info msg="Container eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:35.342462 containerd[1527]: time="2026-01-23T17:57:35.342036023Z" level=info msg="Container be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:57:35.349100 containerd[1527]: time="2026-01-23T17:57:35.349052399Z" level=info msg="CreateContainer within sandbox \"21247af450b15aa7e89182938653e34d9021cf7a0158656dd1f7d4541c85a53e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281\"" Jan 23 17:57:35.354053 containerd[1527]: time="2026-01-23T17:57:35.353960703Z" level=info msg="StartContainer for \"eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281\"" Jan 23 17:57:35.355976 containerd[1527]: time="2026-01-23T17:57:35.355941736Z" level=info msg="connecting to shim 
eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281" address="unix:///run/containerd/s/51bd3dac0fa830fa513539dd3f70f61e59dc7b79d41de59498d0fba2335ff905" protocol=ttrpc version=3
Jan 23 17:57:35.359702 containerd[1527]: time="2026-01-23T17:57:35.359624324Z" level=info msg="CreateContainer within sandbox \"9cf1c0561aa3b9b4621797710ce8b830d3b1fa478b27839b41afaa8a2f110b53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9\""
Jan 23 17:57:35.363836 containerd[1527]: time="2026-01-23T17:57:35.362675594Z" level=info msg="StartContainer for \"be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9\""
Jan 23 17:57:35.366689 containerd[1527]: time="2026-01-23T17:57:35.366642141Z" level=info msg="connecting to shim be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9" address="unix:///run/containerd/s/5174e7c5a55727a4b25a90bc737ab6938c5f8683d93d9b54a25557bf5f72a27f" protocol=ttrpc version=3
Jan 23 17:57:35.387801 systemd[1]: Started cri-containerd-eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281.scope - libcontainer container eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281.
Jan 23 17:57:35.405773 systemd[1]: Started cri-containerd-be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9.scope - libcontainer container be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9.
Jan 23 17:57:35.447449 containerd[1527]: time="2026-01-23T17:57:35.447155471Z" level=info msg="StartContainer for \"eb9a2a14cc02601798078b3ad5b090f42d9f5593e0b4cba567c572c1b4ee9281\" returns successfully"
Jan 23 17:57:35.468232 containerd[1527]: time="2026-01-23T17:57:35.468188641Z" level=info msg="StartContainer for \"be29599a1bce1026addbeac2a081bf8353b7054d0acb050af5998d84d3a283a9\" returns successfully"
Jan 23 17:57:35.508766 containerd[1527]: time="2026-01-23T17:57:35.508718785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 17:57:35.867478 containerd[1527]: time="2026-01-23T17:57:35.867397465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 17:57:35.868917 containerd[1527]: time="2026-01-23T17:57:35.868853140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 17:57:35.869030 containerd[1527]: time="2026-01-23T17:57:35.868883620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 17:57:35.869195 kubelet[2738]: E0123 17:57:35.869148 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 17:57:35.869504 kubelet[2738]: E0123 17:57:35.869208 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 17:57:35.869504 kubelet[2738]: E0123 17:57:35.869366 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 17:57:35.871935 containerd[1527]: time="2026-01-23T17:57:35.871896330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 17:57:36.068941 kubelet[2738]: E0123 17:57:36.067811 2738 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45896->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-9fdf556b5-bgdgn.188d6dbbbfaf51f4 calico-apiserver 1836 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-9fdf556b5-bgdgn,UID:f2da6827-d5c3-485d-a17f-86ee3e12342c,APIVersion:v1,ResourceVersion:849,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-3-1-a204a5ad1b,},FirstTimestamp:2026-01-23 17:54:41 +0000 UTC,LastTimestamp:2026-01-23 17:57:27.506676257 +0000 UTC m=+220.134351990,Count:12,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-1-a204a5ad1b,}"
Jan 23 17:57:36.069944 kubelet[2738]: I0123 17:57:36.069811 2738 status_manager.go:890] "Failed to get status for pod" podUID="528e801571ceda7163e55fc08446afd3" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-a204a5ad1b" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46012->10.0.0.2:2379: read: connection timed out"
Jan 23 17:57:36.219400 containerd[1527]: time="2026-01-23T17:57:36.218641442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 17:57:36.221384 containerd[1527]: time="2026-01-23T17:57:36.221245874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 17:57:36.221384 containerd[1527]: time="2026-01-23T17:57:36.221286474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 17:57:36.221799 kubelet[2738]: E0123 17:57:36.221757 2738 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 17:57:36.221932 kubelet[2738]: E0123 17:57:36.221912 2738 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 17:57:36.222259 kubelet[2738]: E0123 17:57:36.222204 2738 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7872c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jkkz7_calico-system(6e403bca-286c-4acf-bbf0-2ee7f3d0b56e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 17:57:36.223528 kubelet[2738]: E0123 17:57:36.223474 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jkkz7" podUID="6e403bca-286c-4acf-bbf0-2ee7f3d0b56e"
Jan 23 17:57:36.504513 kubelet[2738]: E0123 17:57:36.504272 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zzc7s" podUID="25d835ef-f3bb-42c6-bc1f-07f8b7a82a66"