Jan 23 00:07:05.808288 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 23 00:07:05.808312 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 22 22:21:53 -00 2026
Jan 23 00:07:05.808322 kernel: KASLR enabled
Jan 23 00:07:05.808327 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 00:07:05.808333 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 23 00:07:05.808339 kernel: random: crng init done
Jan 23 00:07:05.808345 kernel: secureboot: Secure boot disabled
Jan 23 00:07:05.808351 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:07:05.808357 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 23 00:07:05.808363 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 23 00:07:05.808370 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808376 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808382 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808388 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808395 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808403 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808409 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808430 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808437 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 00:07:05.808443 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 00:07:05.808449 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 23 00:07:05.808455 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 00:07:05.808462 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 00:07:05.808468 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Jan 23 00:07:05.808473 kernel: Zone ranges:
Jan 23 00:07:05.808479 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 00:07:05.808488 kernel:   DMA32    empty
Jan 23 00:07:05.808494 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Jan 23 00:07:05.808500 kernel:   Device   empty
Jan 23 00:07:05.808506 kernel: Movable zone start for each node
Jan 23 00:07:05.808512 kernel: Early memory node ranges
Jan 23 00:07:05.808518 kernel:   node   0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 23 00:07:05.808524 kernel:   node   0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 23 00:07:05.808530 kernel:   node   0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 23 00:07:05.808536 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 23 00:07:05.808542 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 23 00:07:05.808548 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 23 00:07:05.808554 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 23 00:07:05.808561 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 23 00:07:05.808567 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 23 00:07:05.808576 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 23 00:07:05.808583 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 00:07:05.808589 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 23 00:07:05.808597 kernel: psci: probing for conduit method from ACPI.
Jan 23 00:07:05.808603 kernel: psci: PSCIv1.1 detected in firmware.
Jan 23 00:07:05.808610 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 00:07:05.808616 kernel: psci: Trusted OS migration not required
Jan 23 00:07:05.808622 kernel: psci: SMC Calling Convention v1.1
Jan 23 00:07:05.808629 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 23 00:07:05.808635 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 00:07:05.808641 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 00:07:05.808648 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 00:07:05.808654 kernel: Detected PIPT I-cache on CPU0
Jan 23 00:07:05.808661 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 00:07:05.808669 kernel: CPU features: detected: Spectre-v4
Jan 23 00:07:05.808675 kernel: CPU features: detected: Spectre-BHB
Jan 23 00:07:05.808682 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 23 00:07:05.809760 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 23 00:07:05.809770 kernel: CPU features: detected: ARM erratum 1418040
Jan 23 00:07:05.809776 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 23 00:07:05.809783 kernel: alternatives: applying boot alternatives
Jan 23 00:07:05.809791 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:07:05.809799 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:07:05.809805 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:07:05.809812 kernel: Fallback order for Node 0: 0
Jan 23 00:07:05.809824 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1024000
Jan 23 00:07:05.809831 kernel: Policy zone: Normal
Jan 23 00:07:05.809837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:07:05.809844 kernel: software IO TLB: area num 2.
Jan 23 00:07:05.809850 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 23 00:07:05.809857 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:07:05.809863 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:07:05.809870 kernel: rcu: 	RCU event tracing is enabled.
Jan 23 00:07:05.809877 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:07:05.809884 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 23 00:07:05.809890 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 23 00:07:05.809897 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:07:05.809905 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:07:05.809912 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:07:05.809918 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:07:05.809925 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 00:07:05.809932 kernel: GICv3: 256 SPIs implemented
Jan 23 00:07:05.809938 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 00:07:05.809944 kernel: Root IRQ handler: gic_handle_irq
Jan 23 00:07:05.809951 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 23 00:07:05.809957 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 00:07:05.809964 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 23 00:07:05.809970 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 23 00:07:05.809978 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 00:07:05.809985 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Jan 23 00:07:05.809992 kernel: GICv3: using LPI property table @0x0000000100120000
Jan 23 00:07:05.809998 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Jan 23 00:07:05.810005 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:07:05.810011 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 00:07:05.810018 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 23 00:07:05.810024 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 23 00:07:05.810031 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 23 00:07:05.810037 kernel: Console: colour dummy device 80x25
Jan 23 00:07:05.810044 kernel: ACPI: Core revision 20240827
Jan 23 00:07:05.810053 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 23 00:07:05.810060 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:07:05.810066 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:07:05.810073 kernel: landlock: Up and running.
Jan 23 00:07:05.810080 kernel: SELinux:  Initializing.
Jan 23 00:07:05.810086 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:07:05.810093 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:07:05.810100 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:07:05.810107 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 23 00:07:05.810115 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:07:05.810122 kernel: Remapping and enabling EFI services.
Jan 23 00:07:05.810129 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:07:05.810135 kernel: Detected PIPT I-cache on CPU1
Jan 23 00:07:05.810142 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 23 00:07:05.810149 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Jan 23 00:07:05.810156 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 23 00:07:05.810162 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 23 00:07:05.810169 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:07:05.810176 kernel: SMP: Total of 2 processors activated.
Jan 23 00:07:05.810190 kernel: CPU: All CPU(s) started at EL1
Jan 23 00:07:05.810197 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 00:07:05.810205 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 23 00:07:05.810212 kernel: CPU features: detected: Common not Private translations
Jan 23 00:07:05.810219 kernel: CPU features: detected: CRC32 instructions
Jan 23 00:07:05.810226 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 23 00:07:05.810233 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 23 00:07:05.810242 kernel: CPU features: detected: LSE atomic instructions
Jan 23 00:07:05.810249 kernel: CPU features: detected: Privileged Access Never
Jan 23 00:07:05.810256 kernel: CPU features: detected: RAS Extension Support
Jan 23 00:07:05.810263 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 23 00:07:05.810270 kernel: alternatives: applying system-wide alternatives
Jan 23 00:07:05.810277 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 23 00:07:05.810284 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Jan 23 00:07:05.810292 kernel: devtmpfs: initialized
Jan 23 00:07:05.810299 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:07:05.810307 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:07:05.810314 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 23 00:07:05.810322 kernel: 0 pages in range for non-PLT usage
Jan 23 00:07:05.810329 kernel: 508400 pages in range for PLT usage
Jan 23 00:07:05.810336 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:07:05.810343 kernel: SMBIOS 3.0.0 present.
Jan 23 00:07:05.810350 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 23 00:07:05.810357 kernel: DMI: Memory slots populated: 1/1
Jan 23 00:07:05.810364 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:07:05.810372 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 00:07:05.810379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 00:07:05.810386 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 00:07:05.810393 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:07:05.810401 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Jan 23 00:07:05.810408 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:07:05.810426 kernel: cpuidle: using governor menu
Jan 23 00:07:05.810433 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 00:07:05.810440 kernel: ASID allocator initialised with 32768 entries
Jan 23 00:07:05.810450 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:07:05.810457 kernel: Serial: AMBA PL011 UART driver
Jan 23 00:07:05.810464 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:07:05.810471 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:07:05.810478 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 00:07:05.810485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 00:07:05.810492 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:07:05.810499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:07:05.810506 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 00:07:05.810514 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 00:07:05.810521 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:07:05.810528 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:07:05.810535 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:07:05.810542 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:07:05.810549 kernel: ACPI: Interpreter enabled
Jan 23 00:07:05.810556 kernel: ACPI: Using GIC for interrupt routing
Jan 23 00:07:05.810564 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 00:07:05.810571 kernel: ACPI: CPU0 has been hot-added
Jan 23 00:07:05.810579 kernel: ACPI: CPU1 has been hot-added
Jan 23 00:07:05.810586 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 23 00:07:05.810593 kernel: printk: legacy console [ttyAMA0] enabled
Jan 23 00:07:05.810600 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 00:07:05.811829 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:07:05.811917 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 00:07:05.811976 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 00:07:05.812032 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 23 00:07:05.812093 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 23 00:07:05.812102 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 23 00:07:05.812110 kernel: PCI host bridge to bus 0000:00
Jan 23 00:07:05.812176 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 23 00:07:05.812230 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 00:07:05.812282 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 23 00:07:05.812334 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 00:07:05.812462 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:07:05.812546 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Jan 23 00:07:05.812608 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Jan 23 00:07:05.812668 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 23 00:07:05.813872 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.813952 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Jan 23 00:07:05.814022 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 00:07:05.814083 kernel: pci 0000:00:02.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 23 00:07:05.814148 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 23 00:07:05.814216 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.814277 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Jan 23 00:07:05.814352 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 00:07:05.814430 kernel: pci 0000:00:02.1:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 00:07:05.814513 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.814575 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Jan 23 00:07:05.814634 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 00:07:05.814706 kernel: pci 0000:00:02.2:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 00:07:05.814780 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 23 00:07:05.814848 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.814907 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Jan 23 00:07:05.814969 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 00:07:05.815027 kernel: pci 0000:00:02.3:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 00:07:05.815085 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 23 00:07:05.815151 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.815210 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Jan 23 00:07:05.815268 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 00:07:05.815326 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 23 00:07:05.815385 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 23 00:07:05.815502 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.815567 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Jan 23 00:07:05.815627 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 00:07:05.817811 kernel: pci 0000:00:02.5:   bridge window [mem 0x10600000-0x107fffff]
Jan 23 00:07:05.817960 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 23 00:07:05.818039 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.818109 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Jan 23 00:07:05.818169 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 00:07:05.818228 kernel: pci 0000:00:02.6:   bridge window [mem 0x10400000-0x105fffff]
Jan 23 00:07:05.818287 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Jan 23 00:07:05.818354 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.818433 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Jan 23 00:07:05.818507 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 00:07:05.818573 kernel: pci 0000:00:02.7:   bridge window [mem 0x10200000-0x103fffff]
Jan 23 00:07:05.818646 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 00:07:05.818740 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Jan 23 00:07:05.818805 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 00:07:05.818868 kernel: pci 0000:00:03.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 23 00:07:05.818938 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Jan 23 00:07:05.819000 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Jan 23 00:07:05.819072 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 00:07:05.819135 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Jan 23 00:07:05.819203 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 23 00:07:05.819268 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 00:07:05.819348 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 00:07:05.819419 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Jan 23 00:07:05.819504 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 00:07:05.819570 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Jan 23 00:07:05.819634 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 23 00:07:05.820849 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 00:07:05.820942 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 23 00:07:05.821016 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 00:07:05.821089 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Jan 23 00:07:05.821152 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 23 00:07:05.821227 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 00:07:05.821290 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Jan 23 00:07:05.821351 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 23 00:07:05.821439 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 00:07:05.821506 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Jan 23 00:07:05.821571 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 23 00:07:05.821632 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 00:07:05.823736 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 23 00:07:05.823845 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 23 00:07:05.823908 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 23 00:07:05.823973 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 23 00:07:05.824033 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 23 00:07:05.824097 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 23 00:07:05.824160 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 00:07:05.824220 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 23 00:07:05.824279 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 23 00:07:05.824342 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 00:07:05.824401 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 23 00:07:05.824480 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 23 00:07:05.824554 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 00:07:05.824616 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 23 00:07:05.824674 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 23 00:07:05.824834 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 00:07:05.824903 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 23 00:07:05.824962 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 23 00:07:05.825031 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 00:07:05.825091 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 23 00:07:05.825149 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 23 00:07:05.825213 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 00:07:05.825273 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 23 00:07:05.825333 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 23 00:07:05.825396 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 00:07:05.825512 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 23 00:07:05.825576 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 23 00:07:05.825639 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jan 23 00:07:05.825721 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Jan 23 00:07:05.825786 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Jan 23 00:07:05.825845 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Jan 23 00:07:05.825907 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Jan 23 00:07:05.825970 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Jan 23 00:07:05.826032 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Jan 23 00:07:05.826092 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Jan 23 00:07:05.826155 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Jan 23 00:07:05.826214 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Jan 23 00:07:05.826274 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Jan 23 00:07:05.826332 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Jan 23 00:07:05.826393 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Jan 23 00:07:05.826470 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Jan 23 00:07:05.826533 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Jan 23 00:07:05.826592 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Jan 23 00:07:05.826652 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Jan 23 00:07:05.826724 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Jan 23 00:07:05.826792 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Jan 23 00:07:05.826851 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Jan 23 00:07:05.826911 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Jan 23 00:07:05.826981 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 00:07:05.827054 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Jan 23 00:07:05.827116 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 00:07:05.827175 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Jan 23 00:07:05.827235 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 00:07:05.827294 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Jan 23 00:07:05.827353 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 00:07:05.827424 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Jan 23 00:07:05.827492 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 00:07:05.827555 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Jan 23 00:07:05.827616 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 00:07:05.827675 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Jan 23 00:07:05.828109 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 00:07:05.828176 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Jan 23 00:07:05.828237 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 00:07:05.828300 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Jan 23 00:07:05.828360 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Jan 23 00:07:05.828474 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Jan 23 00:07:05.828555 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 23 00:07:05.828618 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 23 00:07:05.828773 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 23 00:07:05.828855 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 00:07:05.828917 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Jan 23 00:07:05.828976 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 23 00:07:05.829035 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 23 00:07:05.829101 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 23 00:07:05.829162 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 00:07:05.829227 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Jan 23 00:07:05.829285 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Jan 23 00:07:05.829344 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 23 00:07:05.829410 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 23 00:07:05.829497 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 23 00:07:05.829559 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 00:07:05.829618 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Jan 23 00:07:05.829680 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Jan 23 00:07:05.829775 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 23 00:07:05.829844 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 23 00:07:05.829906 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 00:07:05.829965 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Jan 23 00:07:05.830024 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Jan 23 00:07:05.830082 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 23 00:07:05.830154 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 23 00:07:05.830215 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 23 00:07:05.830275 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 00:07:05.830334 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Jan 23 00:07:05.830396 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 23 00:07:05.830470 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 23 00:07:05.830540 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 23 00:07:05.830604 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 23 00:07:05.830665 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 00:07:05.830758 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Jan 23 00:07:05.830819 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 23 00:07:05.830878 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 23 00:07:05.830945 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Jan 23 00:07:05.831007 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Jan 23 00:07:05.831071 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Jan 23 00:07:05.831131 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 00:07:05.831194 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Jan 23 00:07:05.831253 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 23 00:07:05.831313 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 23 00:07:05.831374 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 00:07:05.831571 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Jan 23 00:07:05.831644 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 23 00:07:05.831738 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 23 00:07:05.831807 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 00:07:05.831868 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Jan 23
00:07:05.831934 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 23 00:07:05.831994 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 00:07:05.832058 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 23 00:07:05.832112 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 00:07:05.832165 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 23 00:07:05.832232 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 23 00:07:05.832289 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 23 00:07:05.832347 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 23 00:07:05.832410 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 23 00:07:05.832484 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 23 00:07:05.832540 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 23 00:07:05.832609 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 23 00:07:05.832666 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 23 00:07:05.833043 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 23 00:07:05.833122 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 23 00:07:05.833178 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 23 00:07:05.833233 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 23 00:07:05.833295 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 23 00:07:05.833351 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 23 00:07:05.833406 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 23 00:07:05.833531 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 23 00:07:05.833595 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 23 00:07:05.833651 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 23 00:07:05.833762 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 23 00:07:05.833821 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 23 00:07:05.833878 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 23 00:07:05.833941 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 23 00:07:05.834002 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 23 00:07:05.834057 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 23 00:07:05.834120 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 23 00:07:05.834175 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 23 00:07:05.834235 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 23 00:07:05.834244 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 00:07:05.834253 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 00:07:05.834262 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 00:07:05.834270 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 00:07:05.834277 kernel: iommu: Default domain type: Translated Jan 23 00:07:05.834285 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 00:07:05.834293 kernel: efivars: Registered efivars operations Jan 23 00:07:05.834300 kernel: vgaarb: loaded Jan 23 00:07:05.834308 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 00:07:05.834315 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 00:07:05.834322 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 00:07:05.834331 kernel: pnp: PnP ACPI init Jan 23 00:07:05.834406 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 23 00:07:05.834429 kernel: pnp: PnP ACPI: found 1 devices Jan 23 00:07:05.834437 kernel: NET: Registered PF_INET 
protocol family Jan 23 00:07:05.834445 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 00:07:05.834452 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 00:07:05.834460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 00:07:05.834467 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 00:07:05.834477 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 00:07:05.834485 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 00:07:05.834492 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:07:05.834500 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 00:07:05.834508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 00:07:05.834585 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 23 00:07:05.834597 kernel: PCI: CLS 0 bytes, default 64 Jan 23 00:07:05.834604 kernel: kvm [1]: HYP mode not available Jan 23 00:07:05.834612 kernel: Initialise system trusted keyrings Jan 23 00:07:05.834621 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 00:07:05.834629 kernel: Key type asymmetric registered Jan 23 00:07:05.834636 kernel: Asymmetric key parser 'x509' registered Jan 23 00:07:05.834643 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 23 00:07:05.834651 kernel: io scheduler mq-deadline registered Jan 23 00:07:05.834658 kernel: io scheduler kyber registered Jan 23 00:07:05.834665 kernel: io scheduler bfq registered Jan 23 00:07:05.834673 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 00:07:05.834786 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 23 00:07:05.835545 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 23 00:07:05.835611 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.835675 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 23 00:07:05.835757 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 23 00:07:05.835851 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.835918 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 23 00:07:05.835980 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 23 00:07:05.836040 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.836106 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 23 00:07:05.836166 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 23 00:07:05.836226 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.836290 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 23 00:07:05.836350 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 23 00:07:05.836410 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.836527 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 23 00:07:05.836591 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 23 00:07:05.836654 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.838841 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 23 00:07:05.838930 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 23 00:07:05.838992 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.839056 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 23 00:07:05.839117 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 23 00:07:05.839178 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.839196 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 23 00:07:05.839260 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 23 00:07:05.839321 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 23 00:07:05.839382 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 00:07:05.839392 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 00:07:05.839400 kernel: ACPI: button: Power Button [PWRB] Jan 23 00:07:05.839410 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 00:07:05.839500 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 23 00:07:05.839569 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 23 00:07:05.839583 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 00:07:05.839591 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 00:07:05.839654 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 23 00:07:05.839665 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 23 00:07:05.839672 kernel: thunder_xcv, ver 1.0 Jan 23 00:07:05.839680 kernel: thunder_bgx, ver 1.0 Jan 23 00:07:05.840370 kernel: nicpf, ver 1.0 Jan 23 00:07:05.840382 kernel: nicvf, ver 1.0 Jan 23 00:07:05.840516 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 00:07:05.840589 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T00:07:05 UTC (1769126825) Jan 23 00:07:05.840599 kernel: hid: raw HID events 
driver (C) Jiri Kosina Jan 23 00:07:05.840607 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 23 00:07:05.840615 kernel: NET: Registered PF_INET6 protocol family Jan 23 00:07:05.840622 kernel: watchdog: NMI not fully supported Jan 23 00:07:05.840630 kernel: watchdog: Hard watchdog permanently disabled Jan 23 00:07:05.842735 kernel: Segment Routing with IPv6 Jan 23 00:07:05.842748 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 00:07:05.842756 kernel: NET: Registered PF_PACKET protocol family Jan 23 00:07:05.842763 kernel: Key type dns_resolver registered Jan 23 00:07:05.842771 kernel: registered taskstats version 1 Jan 23 00:07:05.842779 kernel: Loading compiled-in X.509 certificates Jan 23 00:07:05.842786 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 380753d9165686712e58c1d21e00c0268e70f18f' Jan 23 00:07:05.842794 kernel: Demotion targets for Node 0: null Jan 23 00:07:05.842801 kernel: Key type .fscrypt registered Jan 23 00:07:05.842809 kernel: Key type fscrypt-provisioning registered Jan 23 00:07:05.842816 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 00:07:05.842825 kernel: ima: Allocated hash algorithm: sha1 Jan 23 00:07:05.842833 kernel: ima: No architecture policies found Jan 23 00:07:05.842841 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 00:07:05.842848 kernel: clk: Disabling unused clocks Jan 23 00:07:05.842855 kernel: PM: genpd: Disabling unused power domains Jan 23 00:07:05.842863 kernel: Warning: unable to open an initial console. Jan 23 00:07:05.842870 kernel: Freeing unused kernel memory: 39552K Jan 23 00:07:05.842878 kernel: Run /init as init process Jan 23 00:07:05.842885 kernel: with arguments: Jan 23 00:07:05.842895 kernel: /init Jan 23 00:07:05.842902 kernel: with environment: Jan 23 00:07:05.842909 kernel: HOME=/ Jan 23 00:07:05.842917 kernel: TERM=linux Jan 23 00:07:05.842926 systemd[1]: Successfully made /usr/ read-only. 
Jan 23 00:07:05.842937 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:07:05.842945 systemd[1]: Detected virtualization kvm. Jan 23 00:07:05.842954 systemd[1]: Detected architecture arm64. Jan 23 00:07:05.842962 systemd[1]: Running in initrd. Jan 23 00:07:05.842970 systemd[1]: No hostname configured, using default hostname. Jan 23 00:07:05.842978 systemd[1]: Hostname set to . Jan 23 00:07:05.842985 systemd[1]: Initializing machine ID from VM UUID. Jan 23 00:07:05.842993 systemd[1]: Queued start job for default target initrd.target. Jan 23 00:07:05.843001 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:07:05.843009 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:07:05.843020 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 00:07:05.843028 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 00:07:05.843036 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 00:07:05.843044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 00:07:05.843053 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 00:07:05.843062 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 00:07:05.843069 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 23 00:07:05.843079 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:07:05.843087 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:07:05.843095 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:07:05.843102 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:07:05.843110 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:07:05.843118 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:07:05.843126 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:07:05.843136 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 00:07:05.843144 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 23 00:07:05.843153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:07:05.843161 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:07:05.843169 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:07:05.843177 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:07:05.843185 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 00:07:05.843193 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:07:05.843201 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 00:07:05.843209 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 23 00:07:05.843219 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 00:07:05.843227 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:07:05.843235 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 23 00:07:05.843243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:07:05.843251 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 00:07:05.843293 systemd-journald[245]: Collecting audit messages is disabled. Jan 23 00:07:05.843316 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:07:05.843324 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 00:07:05.843333 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:07:05.843342 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 23 00:07:05.843350 kernel: Bridge firewalling registered Jan 23 00:07:05.843358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:07:05.843366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:07:05.843374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:05.843383 systemd-journald[245]: Journal started Jan 23 00:07:05.843402 systemd-journald[245]: Runtime Journal (/run/log/journal/80b69c430d60431988430f35d28586ea) is 8M, max 76.5M, 68.5M free. Jan 23 00:07:05.803287 systemd-modules-load[246]: Inserted module 'overlay' Jan 23 00:07:05.827441 systemd-modules-load[246]: Inserted module 'br_netfilter' Jan 23 00:07:05.848142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 00:07:05.849760 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:07:05.851715 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:07:05.862520 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 23 00:07:05.864103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:07:05.866742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:07:05.883543 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 23 00:07:05.887010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:07:05.890833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:07:05.892668 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:07:05.895913 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 00:07:05.899853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:07:05.926825 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a Jan 23 00:07:05.943313 systemd-resolved[286]: Positive Trust Anchors: Jan 23 00:07:05.943337 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:07:05.943368 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:07:05.948981 systemd-resolved[286]: Defaulting to hostname 'linux'. Jan 23 00:07:05.950779 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:07:05.951459 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:07:06.036762 kernel: SCSI subsystem initialized Jan 23 00:07:06.040734 kernel: Loading iSCSI transport class v2.0-870. Jan 23 00:07:06.048741 kernel: iscsi: registered transport (tcp) Jan 23 00:07:06.062735 kernel: iscsi: registered transport (qla4xxx) Jan 23 00:07:06.062803 kernel: QLogic iSCSI HBA Driver Jan 23 00:07:06.084824 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:07:06.119589 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:07:06.123354 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:07:06.176385 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 00:07:06.180083 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jan 23 00:07:06.248751 kernel: raid6: neonx8 gen() 15531 MB/s Jan 23 00:07:06.265736 kernel: raid6: neonx4 gen() 15656 MB/s Jan 23 00:07:06.282749 kernel: raid6: neonx2 gen() 13057 MB/s Jan 23 00:07:06.299756 kernel: raid6: neonx1 gen() 10290 MB/s Jan 23 00:07:06.316744 kernel: raid6: int64x8 gen() 6739 MB/s Jan 23 00:07:06.333744 kernel: raid6: int64x4 gen() 7267 MB/s Jan 23 00:07:06.350749 kernel: raid6: int64x2 gen() 6033 MB/s Jan 23 00:07:06.367771 kernel: raid6: int64x1 gen() 5024 MB/s Jan 23 00:07:06.367880 kernel: raid6: using algorithm neonx4 gen() 15656 MB/s Jan 23 00:07:06.384793 kernel: raid6: .... xor() 12309 MB/s, rmw enabled Jan 23 00:07:06.384891 kernel: raid6: using neon recovery algorithm Jan 23 00:07:06.389757 kernel: xor: measuring software checksum speed Jan 23 00:07:06.389834 kernel: 8regs : 21636 MB/sec Jan 23 00:07:06.391013 kernel: 32regs : 21653 MB/sec Jan 23 00:07:06.391046 kernel: arm64_neon : 28089 MB/sec Jan 23 00:07:06.391062 kernel: xor: using function: arm64_neon (28089 MB/sec) Jan 23 00:07:06.445749 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 00:07:06.455107 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:07:06.458866 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:07:06.495478 systemd-udevd[495]: Using default interface naming scheme 'v255'. Jan 23 00:07:06.500194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:07:06.507116 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 00:07:06.541675 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Jan 23 00:07:06.572919 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 00:07:06.576095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:07:06.643147 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 23 00:07:06.649581 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 00:07:06.736714 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 00:07:06.753717 kernel: scsi host0: Virtio SCSI HBA Jan 23 00:07:06.760210 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 00:07:06.760313 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 00:07:06.775218 kernel: ACPI: bus type USB registered Jan 23 00:07:06.775272 kernel: usbcore: registered new interface driver usbfs Jan 23 00:07:06.776133 kernel: usbcore: registered new interface driver hub Jan 23 00:07:06.776181 kernel: usbcore: registered new device driver usb Jan 23 00:07:06.796211 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 00:07:06.796444 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 00:07:06.796553 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 00:07:06.797807 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 00:07:06.797974 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 00:07:06.798722 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 00:07:06.799715 kernel: hub 1-0:1.0: USB hub found Jan 23 00:07:06.799899 kernel: hub 1-0:1.0: 4 ports detected Jan 23 00:07:06.799979 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 00:07:06.801442 kernel: hub 2-0:1.0: USB hub found Jan 23 00:07:06.801600 kernel: hub 2-0:1.0: 4 ports detected Jan 23 00:07:06.807029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:07:06.807263 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:06.810171 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 00:07:06.813400 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 23 00:07:06.814760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:07:06.816586 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 23 00:07:06.818751 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 23 00:07:06.817220 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:07:06.822032 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 23 00:07:06.822252 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 00:07:06.822264 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 23 00:07:06.822347 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 23 00:07:06.822435 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 23 00:07:06.825730 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 00:07:06.848127 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 00:07:06.848209 kernel: GPT:17805311 != 80003071 Jan 23 00:07:06.848233 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 00:07:06.848255 kernel: GPT:17805311 != 80003071 Jan 23 00:07:06.848276 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 00:07:06.848297 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:07:06.850107 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 23 00:07:06.851941 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:06.917206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 00:07:06.943592 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 00:07:06.955589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Jan 23 00:07:06.965716 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 00:07:06.966509 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 00:07:06.971733 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 00:07:06.977515 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:07:06.978337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:07:06.979679 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:07:06.982066 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 00:07:06.983536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 00:07:07.007019 disk-uuid[602]: Primary Header is updated. Jan 23 00:07:07.007019 disk-uuid[602]: Secondary Entries is updated. Jan 23 00:07:07.007019 disk-uuid[602]: Secondary Header is updated. Jan 23 00:07:07.017912 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 23 00:07:07.020754 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:07:07.028732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:07:07.040787 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 00:07:07.173158 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 23 00:07:07.173214 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 00:07:07.173939 kernel: usbcore: registered new interface driver usbhid Jan 23 00:07:07.174704 kernel: usbhid: USB HID core driver Jan 23 00:07:07.278780 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 23 00:07:07.406734 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 23 00:07:07.459765 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 23 00:07:08.044723 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 00:07:08.046082 disk-uuid[606]: The operation has completed successfully. Jan 23 00:07:08.118503 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 00:07:08.119417 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 00:07:08.153329 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 00:07:08.187942 sh[627]: Success Jan 23 00:07:08.202740 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 23 00:07:08.202825 kernel: device-mapper: uevent: version 1.0.3 Jan 23 00:07:08.204268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 00:07:08.213733 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 23 00:07:08.277992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 00:07:08.279889 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 00:07:08.300387 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 00:07:08.309879 kernel: BTRFS: device fsid 97a43946-ed04-45c1-a355-c0350e8b973e devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (639) Jan 23 00:07:08.309947 kernel: BTRFS info (device dm-0): first mount of filesystem 97a43946-ed04-45c1-a355-c0350e8b973e Jan 23 00:07:08.311051 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:07:08.318836 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 00:07:08.318908 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 00:07:08.318938 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 00:07:08.320756 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 00:07:08.322153 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:07:08.323537 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 00:07:08.325884 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 00:07:08.327972 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 23 00:07:08.363743 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (674) Jan 23 00:07:08.365735 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:07:08.365803 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:07:08.373094 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 00:07:08.373181 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:07:08.373208 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:07:08.377909 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:07:08.381778 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 00:07:08.383880 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 00:07:08.478363 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:07:08.482331 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:07:08.520549 ignition[723]: Ignition 2.22.0 Jan 23 00:07:08.520564 ignition[723]: Stage: fetch-offline Jan 23 00:07:08.520594 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:08.520603 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:08.524586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 23 00:07:08.520679 ignition[723]: parsed url from cmdline: "" Jan 23 00:07:08.520682 ignition[723]: no config URL provided Jan 23 00:07:08.520700 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:07:08.526709 systemd-networkd[812]: lo: Link UP Jan 23 00:07:08.520706 ignition[723]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:07:08.526712 systemd-networkd[812]: lo: Gained carrier Jan 23 00:07:08.520711 ignition[723]: failed to fetch config: resource requires networking Jan 23 00:07:08.528771 systemd-networkd[812]: Enumeration completed Jan 23 00:07:08.520974 ignition[723]: Ignition finished successfully Jan 23 00:07:08.529091 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:07:08.529335 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:08.529338 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:07:08.530111 systemd[1]: Reached target network.target - Network. Jan 23 00:07:08.530455 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:08.530458 systemd-networkd[812]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:07:08.531503 systemd-networkd[812]: eth0: Link UP Jan 23 00:07:08.531756 systemd-networkd[812]: eth1: Link UP Jan 23 00:07:08.532468 systemd-networkd[812]: eth0: Gained carrier Jan 23 00:07:08.532480 systemd-networkd[812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:08.532548 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 23 00:07:08.536461 systemd-networkd[812]: eth1: Gained carrier Jan 23 00:07:08.536477 systemd-networkd[812]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:08.569965 ignition[817]: Ignition 2.22.0 Jan 23 00:07:08.570651 ignition[817]: Stage: fetch Jan 23 00:07:08.570863 ignition[817]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:08.570873 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:08.570959 ignition[817]: parsed url from cmdline: "" Jan 23 00:07:08.570962 ignition[817]: no config URL provided Jan 23 00:07:08.570967 ignition[817]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 00:07:08.570974 ignition[817]: no config at "/usr/lib/ignition/user.ign" Jan 23 00:07:08.571014 ignition[817]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 00:07:08.573296 ignition[817]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 00:07:08.576615 systemd-networkd[812]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 00:07:08.584894 systemd-networkd[812]: eth0: DHCPv4 address 88.198.161.46/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 00:07:08.774082 ignition[817]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 00:07:08.779546 ignition[817]: GET result: OK Jan 23 00:07:08.779791 ignition[817]: parsing config with SHA512: c765de966c46bb0d1db1e0962da3e4921dafd4f2a59c5c5ed8dd685d0ee804da675ab13e12de56cb67a93785ebbbe75469ba8e17ef2f0da8d5cc41673791e4b4 Jan 23 00:07:08.784495 unknown[817]: fetched base config from "system" Jan 23 00:07:08.784513 unknown[817]: fetched base config from "system" Jan 23 00:07:08.784518 unknown[817]: fetched user config from "hetzner" Jan 23 00:07:08.788152 ignition[817]: fetch: fetch complete Jan 23 00:07:08.788160 ignition[817]: fetch: fetch passed Jan 23 
00:07:08.788237 ignition[817]: Ignition finished successfully Jan 23 00:07:08.790083 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 00:07:08.792774 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 23 00:07:08.826867 ignition[824]: Ignition 2.22.0 Jan 23 00:07:08.826883 ignition[824]: Stage: kargs Jan 23 00:07:08.827035 ignition[824]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:08.827045 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:08.827922 ignition[824]: kargs: kargs passed Jan 23 00:07:08.831674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 00:07:08.827970 ignition[824]: Ignition finished successfully Jan 23 00:07:08.835247 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 00:07:08.868454 ignition[830]: Ignition 2.22.0 Jan 23 00:07:08.869115 ignition[830]: Stage: disks Jan 23 00:07:08.869725 ignition[830]: no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:08.870250 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:08.871865 ignition[830]: disks: disks passed Jan 23 00:07:08.872425 ignition[830]: Ignition finished successfully Jan 23 00:07:08.875492 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 00:07:08.876334 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 00:07:08.877569 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 00:07:08.879015 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:07:08.880290 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:07:08.881432 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:07:08.883389 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 23 00:07:08.914489 systemd-fsck[838]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 00:07:08.919141 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 00:07:08.922366 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 00:07:09.000717 kernel: EXT4-fs (sda9): mounted filesystem f31390ab-27e9-47d9-a374-053913301d53 r/w with ordered data mode. Quota mode: none. Jan 23 00:07:09.002110 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 00:07:09.004298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 00:07:09.007602 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:07:09.009200 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 00:07:09.012981 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 00:07:09.013727 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 00:07:09.013763 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:07:09.038174 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 00:07:09.039911 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 00:07:09.052714 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (846) Jan 23 00:07:09.059311 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:07:09.059384 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:07:09.062977 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 00:07:09.063038 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:07:09.063049 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:07:09.067258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 00:07:09.101616 coreos-metadata[848]: Jan 23 00:07:09.101 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 00:07:09.103976 coreos-metadata[848]: Jan 23 00:07:09.103 INFO Fetch successful Jan 23 00:07:09.107856 coreos-metadata[848]: Jan 23 00:07:09.106 INFO wrote hostname ci-4459-2-2-n-8734b5e787 to /sysroot/etc/hostname Jan 23 00:07:09.108788 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 00:07:09.111745 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 00:07:09.118738 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Jan 23 00:07:09.124342 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 00:07:09.129155 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 00:07:09.234738 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 00:07:09.236746 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 00:07:09.239165 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 23 00:07:09.259718 kernel: BTRFS info (device sda6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:07:09.278442 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 00:07:09.288472 ignition[967]: INFO : Ignition 2.22.0 Jan 23 00:07:09.288472 ignition[967]: INFO : Stage: mount Jan 23 00:07:09.289677 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:09.289677 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:09.289677 ignition[967]: INFO : mount: mount passed Jan 23 00:07:09.292783 ignition[967]: INFO : Ignition finished successfully Jan 23 00:07:09.291807 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 00:07:09.294309 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 00:07:09.310176 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 00:07:09.320711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 00:07:09.347385 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979) Jan 23 00:07:09.347453 kernel: BTRFS info (device sda6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f Jan 23 00:07:09.348121 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 23 00:07:09.351724 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 00:07:09.351789 kernel: BTRFS info (device sda6): turning on async discard Jan 23 00:07:09.351800 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 00:07:09.355600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 00:07:09.391285 ignition[996]: INFO : Ignition 2.22.0 Jan 23 00:07:09.392097 ignition[996]: INFO : Stage: files Jan 23 00:07:09.392502 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:09.392502 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:09.393713 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Jan 23 00:07:09.394853 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 00:07:09.394853 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 00:07:09.399081 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 00:07:09.400001 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 00:07:09.401327 unknown[996]: wrote ssh authorized keys file for user: core Jan 23 00:07:09.402513 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 00:07:09.404538 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 00:07:09.405757 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 23 00:07:09.492410 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 00:07:09.570043 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 
00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:07:09.571563 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 00:07:09.580155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:07:09.580155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 00:07:09.580155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 00:07:09.580155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 00:07:09.580155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 00:07:09.580155 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 23 00:07:09.899707 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 00:07:09.902957 systemd-networkd[812]: eth1: Gained IPv6LL Jan 23 00:07:10.287068 systemd-networkd[812]: eth0: Gained IPv6LL Jan 23 00:07:10.414827 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 23 00:07:10.414827 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 00:07:10.418408 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 00:07:10.421845 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 00:07:10.421845 ignition[996]: INFO : 
files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:07:10.421845 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 00:07:10.421845 ignition[996]: INFO : files: files passed Jan 23 00:07:10.421845 ignition[996]: INFO : Ignition finished successfully Jan 23 00:07:10.422323 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 00:07:10.424738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 00:07:10.431487 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 00:07:10.441100 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 00:07:10.443883 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 00:07:10.452705 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:07:10.452705 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:07:10.455738 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 00:07:10.459133 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:07:10.460349 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 00:07:10.462942 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 00:07:10.526736 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 00:07:10.527778 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 00:07:10.529680 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 23 00:07:10.531312 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 00:07:10.532185 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 00:07:10.533139 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 00:07:10.563582 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:07:10.565955 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 00:07:10.593285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:07:10.595939 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:07:10.596717 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 00:07:10.597272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 00:07:10.597418 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 00:07:10.600024 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 00:07:10.600783 systemd[1]: Stopped target basic.target - Basic System. Jan 23 00:07:10.601881 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 00:07:10.603194 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 00:07:10.604344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 00:07:10.605584 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 00:07:10.606733 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 00:07:10.607900 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 00:07:10.609245 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 00:07:10.610446 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 23 00:07:10.611620 systemd[1]: Stopped target swap.target - Swaps. Jan 23 00:07:10.612815 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 00:07:10.612958 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 00:07:10.614871 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:07:10.615633 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:07:10.616753 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 00:07:10.620767 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:07:10.621528 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 00:07:10.621661 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 00:07:10.624181 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 00:07:10.624573 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 00:07:10.627657 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 00:07:10.627796 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 00:07:10.629518 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 00:07:10.629644 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 00:07:10.633151 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 00:07:10.638011 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 00:07:10.641865 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 00:07:10.643548 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:07:10.647799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 00:07:10.648736 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 23 00:07:10.654794 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 00:07:10.657765 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 00:07:10.671319 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 00:07:10.679057 ignition[1050]: INFO : Ignition 2.22.0 Jan 23 00:07:10.685766 ignition[1050]: INFO : Stage: umount Jan 23 00:07:10.685766 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 00:07:10.685766 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 00:07:10.685766 ignition[1050]: INFO : umount: umount passed Jan 23 00:07:10.685766 ignition[1050]: INFO : Ignition finished successfully Jan 23 00:07:10.682414 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 00:07:10.682767 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 00:07:10.687511 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 00:07:10.687674 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 00:07:10.689675 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 00:07:10.689786 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 00:07:10.691086 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 00:07:10.691142 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 00:07:10.692493 systemd[1]: Stopped target network.target - Network. Jan 23 00:07:10.697947 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 00:07:10.698023 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 00:07:10.699125 systemd[1]: Stopped target paths.target - Path Units. Jan 23 00:07:10.700566 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 00:07:10.704885 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 23 00:07:10.707316 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 00:07:10.708495 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 00:07:10.709458 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 00:07:10.709509 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 00:07:10.710572 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 00:07:10.710610 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 00:07:10.717825 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 00:07:10.717948 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 00:07:10.720752 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 00:07:10.721918 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 00:07:10.722962 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 00:07:10.724348 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 00:07:10.729292 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 00:07:10.729419 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 00:07:10.731969 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 00:07:10.732030 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 00:07:10.735242 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 00:07:10.735375 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 00:07:10.739649 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 00:07:10.739982 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 00:07:10.741740 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 00:07:10.744517 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Jan 23 00:07:10.746854 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 00:07:10.747607 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 00:07:10.747651 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:07:10.749681 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 00:07:10.751293 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 00:07:10.751370 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 00:07:10.753528 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 00:07:10.753609 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:07:10.754611 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 00:07:10.754662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 00:07:10.756745 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 00:07:10.756791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:07:10.759586 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:07:10.764272 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 00:07:10.764345 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:07:10.787742 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 00:07:10.787951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:07:10.790058 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 00:07:10.790139 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 00:07:10.792016 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 23 00:07:10.792055 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:07:10.794219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 00:07:10.794278 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 00:07:10.796147 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 00:07:10.796206 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 00:07:10.798027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 00:07:10.798091 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 00:07:10.800508 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 00:07:10.802810 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 00:07:10.802887 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:07:10.805237 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 00:07:10.805296 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:07:10.809999 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 00:07:10.810071 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:07:10.812480 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 00:07:10.812543 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:07:10.814352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:07:10.814427 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:10.820863 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Jan 23 00:07:10.820947 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 00:07:10.820978 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 00:07:10.821012 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 00:07:10.821365 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 00:07:10.821480 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 00:07:10.825896 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 00:07:10.827136 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 00:07:10.828915 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 00:07:10.833901 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 00:07:10.850420 systemd[1]: Switching root. Jan 23 00:07:10.885110 systemd-journald[245]: Journal stopped Jan 23 00:07:11.827029 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Jan 23 00:07:11.827585 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 00:07:11.827618 kernel: SELinux: policy capability open_perms=1 Jan 23 00:07:11.827631 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 00:07:11.827644 kernel: SELinux: policy capability always_check_network=0 Jan 23 00:07:11.827657 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 00:07:11.827671 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 00:07:11.827680 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 00:07:11.827728 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 00:07:11.827738 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 00:07:11.827747 kernel: audit: type=1403 audit(1769126831.051:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 00:07:11.827761 systemd[1]: Successfully loaded SELinux policy in 69.350ms. Jan 23 00:07:11.827783 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.644ms. Jan 23 00:07:11.827794 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 00:07:11.827808 systemd[1]: Detected virtualization kvm. Jan 23 00:07:11.827817 systemd[1]: Detected architecture arm64. Jan 23 00:07:11.827827 systemd[1]: Detected first boot. Jan 23 00:07:11.827837 systemd[1]: Hostname set to . Jan 23 00:07:11.827849 systemd[1]: Initializing machine ID from VM UUID. Jan 23 00:07:11.827861 zram_generator::config[1093]: No configuration found. Jan 23 00:07:11.827874 kernel: NET: Registered PF_VSOCK protocol family Jan 23 00:07:11.827884 systemd[1]: Populated /etc with preset unit settings. 
Jan 23 00:07:11.827896 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 00:07:11.827906 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 00:07:11.827916 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 00:07:11.827925 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 00:07:11.827936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 00:07:11.827946 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 00:07:11.827958 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 00:07:11.827968 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 00:07:11.827978 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 00:07:11.827988 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 00:07:11.827998 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 00:07:11.828007 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 00:07:11.828017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 00:07:11.828027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 00:07:11.828037 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 00:07:11.828048 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 00:07:11.828058 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 00:07:11.828068 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 23 00:07:11.828078 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 23 00:07:11.828089 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 00:07:11.828099 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 00:07:11.828110 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 00:07:11.828120 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 00:07:11.828130 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 00:07:11.828140 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 00:07:11.828150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 00:07:11.828206 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 00:07:11.828218 systemd[1]: Reached target slices.target - Slice Units. Jan 23 00:07:11.828228 systemd[1]: Reached target swap.target - Swaps. Jan 23 00:07:11.828241 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 00:07:11.828257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 00:07:11.828270 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 00:07:11.828281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 00:07:11.828291 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 00:07:11.828301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 00:07:11.828314 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 00:07:11.828324 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 00:07:11.828334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jan 23 00:07:11.828344 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 00:07:11.828356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 00:07:11.828366 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 00:07:11.828381 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 00:07:11.828393 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 00:07:11.828404 systemd[1]: Reached target machines.target - Containers. Jan 23 00:07:11.828414 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 00:07:11.828424 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:07:11.828434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 00:07:11.828444 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 00:07:11.828456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:07:11.828468 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:07:11.828478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:07:11.828487 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 00:07:11.828497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:07:11.828509 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 00:07:11.828519 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 00:07:11.828531 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Jan 23 00:07:11.828541 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 00:07:11.828550 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 00:07:11.828561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:07:11.828571 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 00:07:11.828582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 00:07:11.828594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 00:07:11.828605 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 00:07:11.828615 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 00:07:11.828625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 00:07:11.828676 systemd-journald[1159]: Collecting audit messages is disabled. Jan 23 00:07:11.829765 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 00:07:11.829787 systemd[1]: Stopped verity-setup.service. Jan 23 00:07:11.829802 systemd-journald[1159]: Journal started Jan 23 00:07:11.829825 systemd-journald[1159]: Runtime Journal (/run/log/journal/80b69c430d60431988430f35d28586ea) is 8M, max 76.5M, 68.5M free. Jan 23 00:07:11.582314 systemd[1]: Queued start job for default target multi-user.target. Jan 23 00:07:11.607575 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 23 00:07:11.608108 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 00:07:11.836700 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 00:07:11.836358 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 23 00:07:11.838283 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 00:07:11.839702 kernel: fuse: init (API version 7.41) Jan 23 00:07:11.842191 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 00:07:11.842939 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 00:07:11.843746 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 00:07:11.845351 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 00:07:11.847186 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 00:07:11.849159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:07:11.849820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:07:11.852097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 00:07:11.852269 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:07:11.853650 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 00:07:11.854300 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 00:07:11.862261 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 00:07:11.868560 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 00:07:11.869868 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 00:07:11.876857 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 00:07:11.878218 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 00:07:11.881710 kernel: ACPI: bus type drm_connector registered Jan 23 00:07:11.884797 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:07:11.885074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 23 00:07:11.887713 kernel: loop: module loaded Jan 23 00:07:11.889502 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:07:11.889732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:07:11.901396 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 00:07:11.910899 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 00:07:11.911730 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 00:07:11.913784 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 00:07:11.915654 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 00:07:11.925062 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 00:07:11.926918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:07:11.929950 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 00:07:11.933095 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 00:07:11.934788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:07:11.936419 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 00:07:11.937921 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:07:11.944102 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 00:07:11.952865 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 23 00:07:11.959223 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 00:07:11.963885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 00:07:11.965334 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 00:07:11.967066 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 00:07:11.973401 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 00:07:11.985633 systemd-journald[1159]: Time spent on flushing to /var/log/journal/80b69c430d60431988430f35d28586ea is 56.281ms for 1176 entries. Jan 23 00:07:11.985633 systemd-journald[1159]: System Journal (/var/log/journal/80b69c430d60431988430f35d28586ea) is 8M, max 584.8M, 576.8M free. Jan 23 00:07:12.064020 systemd-journald[1159]: Received client request to flush runtime journal. Jan 23 00:07:12.064089 kernel: loop0: detected capacity change from 0 to 100632 Jan 23 00:07:12.064112 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 00:07:11.992147 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 00:07:12.006544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 00:07:12.007678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 00:07:12.018952 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 00:07:12.027045 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 00:07:12.066768 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 00:07:12.071967 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 23 00:07:12.083319 kernel: loop1: detected capacity change from 0 to 8 Jan 23 00:07:12.081193 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 23 00:07:12.081218 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 23 00:07:12.086760 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 00:07:12.090880 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 00:07:12.102427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 00:07:12.114715 kernel: loop2: detected capacity change from 0 to 119840 Jan 23 00:07:12.144810 kernel: loop3: detected capacity change from 0 to 200800 Jan 23 00:07:12.149981 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 00:07:12.156900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 00:07:12.182055 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 23 00:07:12.182075 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Jan 23 00:07:12.187746 kernel: loop4: detected capacity change from 0 to 100632 Jan 23 00:07:12.188313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 00:07:12.217725 kernel: loop5: detected capacity change from 0 to 8 Jan 23 00:07:12.222718 kernel: loop6: detected capacity change from 0 to 119840 Jan 23 00:07:12.236735 kernel: loop7: detected capacity change from 0 to 200800 Jan 23 00:07:12.266319 (sd-merge)[1240]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 23 00:07:12.267059 (sd-merge)[1240]: Merged extensions into '/usr'. Jan 23 00:07:12.272856 systemd[1]: Reload requested from client PID 1210 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 00:07:12.272874 systemd[1]: Reloading... Jan 23 00:07:12.429724 zram_generator::config[1271]: No configuration found. 
Jan 23 00:07:12.497582 ldconfig[1202]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 00:07:12.629231 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 00:07:12.629964 systemd[1]: Reloading finished in 356 ms. Jan 23 00:07:12.646753 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 00:07:12.647804 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 00:07:12.660936 systemd[1]: Starting ensure-sysext.service... Jan 23 00:07:12.664860 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 00:07:12.690004 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)... Jan 23 00:07:12.690017 systemd[1]: Reloading... Jan 23 00:07:12.693457 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 00:07:12.693892 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 00:07:12.694170 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 00:07:12.694368 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 00:07:12.695173 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 00:07:12.695486 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Jan 23 00:07:12.695541 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Jan 23 00:07:12.699484 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 00:07:12.699500 systemd-tmpfiles[1306]: Skipping /boot Jan 23 00:07:12.709195 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 23 00:07:12.709215 systemd-tmpfiles[1306]: Skipping /boot Jan 23 00:07:12.764723 zram_generator::config[1333]: No configuration found. Jan 23 00:07:12.931458 systemd[1]: Reloading finished in 241 ms. Jan 23 00:07:12.951208 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 00:07:12.957367 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 00:07:12.967881 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:07:12.972187 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 00:07:12.974967 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 00:07:12.985941 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 00:07:12.990941 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 00:07:12.998151 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 00:07:13.006640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:07:13.009490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 00:07:13.012972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 00:07:13.019657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 00:07:13.020855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:07:13.020991 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 23 00:07:13.024139 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 00:07:13.027719 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 00:07:13.038043 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 00:07:13.042551 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:07:13.042739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:07:13.042827 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:07:13.045616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 00:07:13.059566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 00:07:13.061171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 00:07:13.061306 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 00:07:13.071949 systemd[1]: Finished ensure-sysext.service. Jan 23 00:07:13.074561 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 00:07:13.076017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 00:07:13.076177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 00:07:13.079456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 23 00:07:13.079678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 00:07:13.083744 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 00:07:13.086089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 00:07:13.089542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 00:07:13.094837 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 00:07:13.097141 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 00:07:13.097365 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 00:07:13.109604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 00:07:13.110845 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 00:07:13.112662 systemd-udevd[1377]: Using default interface naming scheme 'v255'. Jan 23 00:07:13.114986 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 00:07:13.117781 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 00:07:13.144328 augenrules[1413]: No rules Jan 23 00:07:13.145912 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:07:13.147748 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:07:13.150805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 00:07:13.155489 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 00:07:13.163194 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 00:07:13.315508 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 23 00:07:13.415061 systemd-networkd[1422]: lo: Link UP Jan 23 00:07:13.415803 systemd-networkd[1422]: lo: Gained carrier Jan 23 00:07:13.417225 systemd-networkd[1422]: Enumeration completed Jan 23 00:07:13.417836 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 00:07:13.420783 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 00:07:13.423364 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:13.423879 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:07:13.424818 systemd-networkd[1422]: eth0: Link UP Jan 23 00:07:13.425098 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 00:07:13.426107 systemd-networkd[1422]: eth0: Gained carrier Jan 23 00:07:13.426877 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:13.465792 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 00:07:13.466537 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 00:07:13.469215 systemd-resolved[1375]: Positive Trust Anchors: Jan 23 00:07:13.469239 systemd-resolved[1375]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 00:07:13.469272 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 00:07:13.476035 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 00:07:13.476756 systemd-resolved[1375]: Using system hostname 'ci-4459-2-2-n-8734b5e787'. Jan 23 00:07:13.478337 systemd-networkd[1422]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 00:07:13.478650 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 00:07:13.479392 systemd-networkd[1422]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 00:07:13.479808 systemd[1]: Reached target network.target - Network. Jan 23 00:07:13.481029 systemd-networkd[1422]: eth1: Link UP Jan 23 00:07:13.481168 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:07:13.481880 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:07:13.483578 systemd-networkd[1422]: eth1: Gained carrier Jan 23 00:07:13.483671 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:07:13.484609 systemd-networkd[1422]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 00:07:13.485006 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:07:13.486971 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:07:13.489179 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:07:13.490852 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:07:13.491582 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:07:13.491611 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:07:13.492483 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:07:13.497434 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:07:13.501017 systemd-networkd[1422]: eth0: DHCPv4 address 88.198.161.46/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 00:07:13.504066 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 00:07:13.505763 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 00:07:13.505247 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Jan 23 00:07:13.508601 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:07:13.511890 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:07:13.513462 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 00:07:13.527285 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 00:07:13.529195 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:07:13.532545 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 23 00:07:13.541764 systemd-networkd[1422]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 00:07:13.550843 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:07:13.551445 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:07:13.552420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:07:13.552530 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:07:13.554849 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:07:13.556981 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 00:07:13.560556 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:07:13.564176 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:07:13.566337 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:07:13.573941 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:07:13.576794 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:07:13.581930 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:07:13.585892 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 00:07:13.591143 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:07:13.604886 jq[1485]: false Jan 23 00:07:13.600958 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 00:07:13.605499 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 00:07:13.608798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:07:13.609341 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:07:13.613936 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:07:13.625007 extend-filesystems[1486]: Found /dev/sda6 Jan 23 00:07:13.638570 extend-filesystems[1486]: Found /dev/sda9 Jan 23 00:07:13.633571 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:07:13.639602 extend-filesystems[1486]: Checking size of /dev/sda9 Jan 23 00:07:13.654071 coreos-metadata[1482]: Jan 23 00:07:13.641 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 00:07:13.654071 coreos-metadata[1482]: Jan 23 00:07:13.643 INFO Fetch successful Jan 23 00:07:13.654071 coreos-metadata[1482]: Jan 23 00:07:13.643 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 00:07:13.654071 coreos-metadata[1482]: Jan 23 00:07:13.644 INFO Fetch successful Jan 23 00:07:13.642726 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 00:07:13.645837 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:07:13.646058 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 00:07:13.662677 extend-filesystems[1486]: Resized partition /dev/sda9 Jan 23 00:07:13.668281 extend-filesystems[1512]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:07:13.670138 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 23 00:07:13.672333 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:07:13.675450 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 00:07:13.691486 jq[1499]: true Jan 23 00:07:13.708058 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 00:07:13.731949 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 00:07:13.737332 dbus-daemon[1483]: [system] SELinux support is enabled Jan 23 00:07:13.740674 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 00:07:13.747140 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:07:13.747976 systemd-timesyncd[1408]: Contacted time server 178.215.228.24:123 (0.flatcar.pool.ntp.org). Jan 23 00:07:13.748030 systemd-timesyncd[1408]: Initial clock synchronization to Fri 2026-01-23 00:07:13.988988 UTC. Jan 23 00:07:13.758482 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 00:07:13.758547 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:07:13.761072 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:07:13.761091 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:07:13.767362 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:07:13.769751 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:07:13.776862 update_engine[1497]: I20260123 00:07:13.776449 1497 main.cc:92] Flatcar Update Engine starting Jan 23 00:07:13.783786 systemd[1]: Started update-engine.service - Update Engine. 
Jan 23 00:07:13.785055 update_engine[1497]: I20260123 00:07:13.783670 1497 update_check_scheduler.cc:74] Next update check in 8m43s Jan 23 00:07:13.806713 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 23 00:07:13.806822 jq[1527]: true Jan 23 00:07:13.821779 extend-filesystems[1512]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 00:07:13.821779 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 23 00:07:13.821779 extend-filesystems[1512]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 23 00:07:13.838438 extend-filesystems[1486]: Resized filesystem in /dev/sda9 Jan 23 00:07:13.823287 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 00:07:13.844902 tar[1514]: linux-arm64/LICENSE Jan 23 00:07:13.844902 tar[1514]: linux-arm64/helm Jan 23 00:07:13.825247 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:07:13.826797 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:07:13.835342 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 23 00:07:13.865214 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 23 00:07:13.865309 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 00:07:13.865327 kernel: [drm] features: -context_init Jan 23 00:07:13.865343 kernel: [drm] number of scanouts: 1 Jan 23 00:07:13.865714 kernel: [drm] number of cap sets: 0 Jan 23 00:07:13.868956 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 00:07:13.876709 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 00:07:13.894710 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 00:07:13.895978 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jan 23 00:07:13.899729 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 00:07:13.920254 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:07:13.922029 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 00:07:14.014578 systemd-logind[1494]: New seat seat0. Jan 23 00:07:14.018626 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:07:14.033729 bash[1572]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:07:14.036711 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:07:14.040441 systemd[1]: Starting sshkeys.service... Jan 23 00:07:14.089812 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:07:14.093770 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 00:07:14.204865 containerd[1518]: time="2026-01-23T00:07:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:07:14.208272 containerd[1518]: time="2026-01-23T00:07:14.208217742Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:07:14.262479 containerd[1518]: time="2026-01-23T00:07:14.262412473Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.816µs" Jan 23 00:07:14.262479 containerd[1518]: time="2026-01-23T00:07:14.262464351Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:07:14.262601 containerd[1518]: time="2026-01-23T00:07:14.262491794Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:07:14.262697 containerd[1518]: time="2026-01-23T00:07:14.262670051Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:07:14.267738 containerd[1518]: time="2026-01-23T00:07:14.262702521Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:07:14.267860 containerd[1518]: time="2026-01-23T00:07:14.267766260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:07:14.267933 containerd[1518]: time="2026-01-23T00:07:14.267906319Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:07:14.267963 containerd[1518]: time="2026-01-23T00:07:14.267931537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 
00:07:14.268285 containerd[1518]: time="2026-01-23T00:07:14.268240745Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268285 containerd[1518]: time="2026-01-23T00:07:14.268276058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268355 containerd[1518]: time="2026-01-23T00:07:14.268297362Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268355 containerd[1518]: time="2026-01-23T00:07:14.268310383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268430 containerd[1518]: time="2026-01-23T00:07:14.268405074Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268662 containerd[1518]: time="2026-01-23T00:07:14.268633972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268694 containerd[1518]: time="2026-01-23T00:07:14.268676414Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:07:14.268727 containerd[1518]: time="2026-01-23T00:07:14.268692525Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:07:14.271606 containerd[1518]: time="2026-01-23T00:07:14.271548128Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:07:14.275454 
containerd[1518]: time="2026-01-23T00:07:14.275345325Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:07:14.275561 containerd[1518]: time="2026-01-23T00:07:14.275541835Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:07:14.285865 containerd[1518]: time="2026-01-23T00:07:14.285817573Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:07:14.285965 containerd[1518]: time="2026-01-23T00:07:14.285898707Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:07:14.285965 containerd[1518]: time="2026-01-23T00:07:14.285916467Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:07:14.285965 containerd[1518]: time="2026-01-23T00:07:14.285929982Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:07:14.285965 containerd[1518]: time="2026-01-23T00:07:14.285945187Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:07:14.285965 containerd[1518]: time="2026-01-23T00:07:14.285956766Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:07:14.286074 containerd[1518]: time="2026-01-23T00:07:14.285971353Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:07:14.286074 containerd[1518]: time="2026-01-23T00:07:14.285984662Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:07:14.286074 containerd[1518]: time="2026-01-23T00:07:14.285996900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:07:14.286074 containerd[1518]: 
time="2026-01-23T00:07:14.286006996Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:07:14.286074 containerd[1518]: time="2026-01-23T00:07:14.286019440Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:07:14.286074 containerd[1518]: time="2026-01-23T00:07:14.286036499Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:07:14.286209 containerd[1518]: time="2026-01-23T00:07:14.286185788Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:07:14.286235 containerd[1518]: time="2026-01-23T00:07:14.286219535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:07:14.286328 containerd[1518]: time="2026-01-23T00:07:14.286237254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:07:14.286328 containerd[1518]: time="2026-01-23T00:07:14.286254024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:07:14.286328 containerd[1518]: time="2026-01-23T00:07:14.286267128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:07:14.286328 containerd[1518]: time="2026-01-23T00:07:14.286285052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:07:14.286328 containerd[1518]: time="2026-01-23T00:07:14.286297538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:07:14.286557 containerd[1518]: time="2026-01-23T00:07:14.286310064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:07:14.286607 containerd[1518]: time="2026-01-23T00:07:14.286570361Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:07:14.286607 containerd[1518]: time="2026-01-23T00:07:14.286585649Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 00:07:14.286607 containerd[1518]: time="2026-01-23T00:07:14.286599576Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:07:14.291826 containerd[1518]: time="2026-01-23T00:07:14.290186953Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:07:14.291826 containerd[1518]: time="2026-01-23T00:07:14.290252841Z" level=info msg="Start snapshots syncer" Jan 23 00:07:14.291826 containerd[1518]: time="2026-01-23T00:07:14.290302041Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:07:14.289996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 23 00:07:14.292199 containerd[1518]: time="2026-01-23T00:07:14.290589163Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:07:14.292199 containerd[1518]: time="2026-01-23T00:07:14.290637332Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Jan 23 00:07:14.292199 containerd[1518]: time="2026-01-23T00:07:14.290698729Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.296881989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.296976391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.297010015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.297058061Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.297075821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.297088265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:07:14.297114 containerd[1518]: time="2026-01-23T00:07:14.297103964Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297141956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297154153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297168369Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart 
type=io.containerd.monitor.container.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297211924Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297228777Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297238831Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297250080Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297258280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297268911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297280202Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297363561Z" level=info msg="runtime interface created" Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297368877Z" level=info msg="created NRI interface" Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297379137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:07:14.297968 containerd[1518]: time="2026-01-23T00:07:14.297397473Z" level=info msg="Connect containerd service" Jan 23 00:07:14.297968 containerd[1518]: 
time="2026-01-23T00:07:14.297420631Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:07:14.307127 containerd[1518]: time="2026-01-23T00:07:14.307038435Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:07:14.309726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 00:07:14.310141 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:14.319180 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 00:07:14.320128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 00:07:14.336896 systemd-logind[1494]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 23 00:07:14.337262 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:07:14.369800 coreos-metadata[1579]: Jan 23 00:07:14.368 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 00:07:14.379077 coreos-metadata[1579]: Jan 23 00:07:14.377 INFO Fetch successful Jan 23 00:07:14.379917 unknown[1579]: wrote ssh authorized keys file for user: core Jan 23 00:07:14.454891 update-ssh-keys[1605]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:07:14.457809 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 00:07:14.465900 systemd[1]: Finished sshkeys.service. Jan 23 00:07:14.526213 tar[1514]: linux-arm64/README.md Jan 23 00:07:14.536166 containerd[1518]: time="2026-01-23T00:07:14.536118445Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:07:14.536251 containerd[1518]: time="2026-01-23T00:07:14.536185858Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 23 00:07:14.536251 containerd[1518]: time="2026-01-23T00:07:14.536210870Z" level=info msg="Start subscribing containerd event" Jan 23 00:07:14.536251 containerd[1518]: time="2026-01-23T00:07:14.536249232Z" level=info msg="Start recovering state" Jan 23 00:07:14.536335 containerd[1518]: time="2026-01-23T00:07:14.536329419Z" level=info msg="Start event monitor" Jan 23 00:07:14.536355 containerd[1518]: time="2026-01-23T00:07:14.536345984Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:07:14.536378 containerd[1518]: time="2026-01-23T00:07:14.536355461Z" level=info msg="Start streaming server" Jan 23 00:07:14.536378 containerd[1518]: time="2026-01-23T00:07:14.536363702Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:07:14.536378 containerd[1518]: time="2026-01-23T00:07:14.536371943Z" level=info msg="runtime interface starting up..." Jan 23 00:07:14.536378 containerd[1518]: time="2026-01-23T00:07:14.536378001Z" level=info msg="starting plugins..." Jan 23 00:07:14.536451 containerd[1518]: time="2026-01-23T00:07:14.536392587Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:07:14.537755 containerd[1518]: time="2026-01-23T00:07:14.536508623Z" level=info msg="containerd successfully booted in 0.334559s" Jan 23 00:07:14.536628 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:07:14.552071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 00:07:14.559810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 00:07:15.150882 systemd-networkd[1422]: eth0: Gained IPv6LL Jan 23 00:07:15.154925 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:07:15.156607 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:07:15.161024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 00:07:15.166132 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 00:07:15.200949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:07:15.215785 systemd-networkd[1422]: eth1: Gained IPv6LL Jan 23 00:07:15.334675 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:07:15.357601 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:07:15.364091 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:07:15.379814 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:07:15.382747 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:07:15.387987 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:07:15.410418 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:07:15.415365 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:07:15.420995 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 23 00:07:15.422163 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:07:16.016401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:16.018833 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 00:07:16.022850 systemd[1]: Startup finished in 2.405s (kernel) + 5.425s (initrd) + 5.040s (userspace) = 12.872s. 
Jan 23 00:07:16.029107 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:16.512133 kubelet[1659]: E0123 00:07:16.511972 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:16.516505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:16.516696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:16.517476 systemd[1]: kubelet.service: Consumed 827ms CPU time, 247M memory peak. Jan 23 00:07:26.594837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:07:26.596559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:26.796366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:26.807174 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:26.857958 kubelet[1678]: E0123 00:07:26.857830 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:26.861446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:26.862050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:26.862646 systemd[1]: kubelet.service: Consumed 188ms CPU time, 107.2M memory peak. 
Jan 23 00:07:37.092425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 00:07:37.094704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:37.245357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:37.258277 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:07:37.311176 kubelet[1693]: E0123 00:07:37.311024 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:07:37.314514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:07:37.314627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:07:37.315449 systemd[1]: kubelet.service: Consumed 179ms CPU time, 106.8M memory peak. Jan 23 00:07:46.495204 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:07:46.498008 systemd[1]: Started sshd@0-88.198.161.46:22-68.220.241.50:49298.service - OpenSSH per-connection server daemon (68.220.241.50:49298). Jan 23 00:07:47.129610 sshd[1701]: Accepted publickey for core from 68.220.241.50 port 49298 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE Jan 23 00:07:47.135089 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:07:47.148810 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:07:47.159724 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:07:47.163006 systemd-logind[1494]: New session 1 of user core. 
Jan 23 00:07:47.187747 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:07:47.190452 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:07:47.207205 (systemd)[1706]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:07:47.210788 systemd-logind[1494]: New session c1 of user core. Jan 23 00:07:47.342634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 00:07:47.346913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:47.354425 systemd[1706]: Queued start job for default target default.target. Jan 23 00:07:47.356061 systemd[1706]: Created slice app.slice - User Application Slice. Jan 23 00:07:47.356264 systemd[1706]: Reached target paths.target - Paths. Jan 23 00:07:47.356396 systemd[1706]: Reached target timers.target - Timers. Jan 23 00:07:47.359898 systemd[1706]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 00:07:47.383726 systemd[1706]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:07:47.383853 systemd[1706]: Reached target sockets.target - Sockets. Jan 23 00:07:47.383898 systemd[1706]: Reached target basic.target - Basic System. Jan 23 00:07:47.383924 systemd[1706]: Reached target default.target - Main User Target. Jan 23 00:07:47.383948 systemd[1706]: Startup finished in 165ms. Jan 23 00:07:47.384327 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:07:47.394029 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:07:47.508664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:07:47.521677 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:47.562282 kubelet[1723]: E0123 00:07:47.562228 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:47.565923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:47.566064 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:47.566724 systemd[1]: kubelet.service: Consumed 164ms CPU time, 107M memory peak.
Jan 23 00:07:47.848571 systemd[1]: Started sshd@1-88.198.161.46:22-68.220.241.50:49310.service - OpenSSH per-connection server daemon (68.220.241.50:49310).
Jan 23 00:07:48.483882 sshd[1732]: Accepted publickey for core from 68.220.241.50 port 49310 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:48.486209 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:48.494979 systemd-logind[1494]: New session 2 of user core.
Jan 23 00:07:48.505168 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 23 00:07:48.923729 sshd[1735]: Connection closed by 68.220.241.50 port 49310
Jan 23 00:07:48.924070 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:48.929997 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit.
Jan 23 00:07:48.930989 systemd[1]: sshd@1-88.198.161.46:22-68.220.241.50:49310.service: Deactivated successfully.
Jan 23 00:07:48.935552 systemd[1]: session-2.scope: Deactivated successfully.
Jan 23 00:07:48.942607 systemd-logind[1494]: Removed session 2.
Jan 23 00:07:49.044213 systemd[1]: Started sshd@2-88.198.161.46:22-68.220.241.50:49318.service - OpenSSH per-connection server daemon (68.220.241.50:49318).
Jan 23 00:07:49.692734 sshd[1741]: Accepted publickey for core from 68.220.241.50 port 49318 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:49.694093 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:49.700430 systemd-logind[1494]: New session 3 of user core.
Jan 23 00:07:49.707726 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 23 00:07:50.132772 sshd[1744]: Connection closed by 68.220.241.50 port 49318
Jan 23 00:07:50.132879 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:50.138555 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit.
Jan 23 00:07:50.138911 systemd[1]: sshd@2-88.198.161.46:22-68.220.241.50:49318.service: Deactivated successfully.
Jan 23 00:07:50.142902 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 00:07:50.147990 systemd-logind[1494]: Removed session 3.
Jan 23 00:07:50.254028 systemd[1]: Started sshd@3-88.198.161.46:22-68.220.241.50:49334.service - OpenSSH per-connection server daemon (68.220.241.50:49334).
Jan 23 00:07:50.896740 sshd[1750]: Accepted publickey for core from 68.220.241.50 port 49334 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:50.898918 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:50.906298 systemd-logind[1494]: New session 4 of user core.
Jan 23 00:07:50.916989 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 23 00:07:51.343430 sshd[1753]: Connection closed by 68.220.241.50 port 49334
Jan 23 00:07:51.344297 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:51.351212 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit.
Jan 23 00:07:51.351985 systemd[1]: sshd@3-88.198.161.46:22-68.220.241.50:49334.service: Deactivated successfully.
Jan 23 00:07:51.355332 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 00:07:51.358966 systemd-logind[1494]: Removed session 4.
Jan 23 00:07:51.455188 systemd[1]: Started sshd@4-88.198.161.46:22-68.220.241.50:49342.service - OpenSSH per-connection server daemon (68.220.241.50:49342).
Jan 23 00:07:52.095413 sshd[1759]: Accepted publickey for core from 68.220.241.50 port 49342 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:52.098878 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:52.104568 systemd-logind[1494]: New session 5 of user core.
Jan 23 00:07:52.113307 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 23 00:07:52.445165 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 23 00:07:52.445468 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:52.459184 sudo[1763]: pam_unix(sudo:session): session closed for user root
Jan 23 00:07:52.558800 sshd[1762]: Connection closed by 68.220.241.50 port 49342
Jan 23 00:07:52.560097 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:52.567000 systemd[1]: sshd@4-88.198.161.46:22-68.220.241.50:49342.service: Deactivated successfully.
Jan 23 00:07:52.570140 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 00:07:52.571868 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit.
Jan 23 00:07:52.573273 systemd-logind[1494]: Removed session 5.
Jan 23 00:07:52.683791 systemd[1]: Started sshd@5-88.198.161.46:22-68.220.241.50:41834.service - OpenSSH per-connection server daemon (68.220.241.50:41834).
Jan 23 00:07:53.321199 sshd[1769]: Accepted publickey for core from 68.220.241.50 port 41834 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:53.323398 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:53.331860 systemd-logind[1494]: New session 6 of user core.
Jan 23 00:07:53.340318 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 23 00:07:53.663374 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 23 00:07:53.663819 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:53.672386 sudo[1774]: pam_unix(sudo:session): session closed for user root
Jan 23 00:07:53.679526 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 23 00:07:53.680496 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:53.704999 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:07:53.762469 augenrules[1796]: No rules
Jan 23 00:07:53.764444 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:07:53.765859 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:07:53.768068 sudo[1773]: pam_unix(sudo:session): session closed for user root
Jan 23 00:07:53.866199 sshd[1772]: Connection closed by 68.220.241.50 port 41834
Jan 23 00:07:53.866726 sshd-session[1769]: pam_unix(sshd:session): session closed for user core
Jan 23 00:07:53.873613 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit.
Jan 23 00:07:53.874218 systemd[1]: sshd@5-88.198.161.46:22-68.220.241.50:41834.service: Deactivated successfully.
Jan 23 00:07:53.877353 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 00:07:53.881389 systemd-logind[1494]: Removed session 6.
Jan 23 00:07:53.974296 systemd[1]: Started sshd@6-88.198.161.46:22-68.220.241.50:41836.service - OpenSSH per-connection server daemon (68.220.241.50:41836).
Jan 23 00:07:54.615978 sshd[1805]: Accepted publickey for core from 68.220.241.50 port 41836 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:07:54.620247 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:07:54.632293 systemd-logind[1494]: New session 7 of user core.
Jan 23 00:07:54.636030 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 23 00:07:54.957364 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 23 00:07:54.958561 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 23 00:07:55.284069 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 23 00:07:55.309676 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 00:07:55.542645 dockerd[1827]: time="2026-01-23T00:07:55.542489230Z" level=info msg="Starting up"
Jan 23 00:07:55.545182 dockerd[1827]: time="2026-01-23T00:07:55.545015425Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 00:07:55.558002 dockerd[1827]: time="2026-01-23T00:07:55.557915060Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 00:07:55.574566 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2788966027-merged.mount: Deactivated successfully.
Jan 23 00:07:55.596858 systemd[1]: var-lib-docker-metacopy\x2dcheck2973510705-merged.mount: Deactivated successfully.
Jan 23 00:07:55.607033 dockerd[1827]: time="2026-01-23T00:07:55.606738634Z" level=info msg="Loading containers: start."
Jan 23 00:07:55.615718 kernel: Initializing XFRM netlink socket
Jan 23 00:07:55.866311 systemd-networkd[1422]: docker0: Link UP
Jan 23 00:07:55.872546 dockerd[1827]: time="2026-01-23T00:07:55.871836106Z" level=info msg="Loading containers: done."
Jan 23 00:07:55.891831 dockerd[1827]: time="2026-01-23T00:07:55.891776132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 00:07:55.892159 dockerd[1827]: time="2026-01-23T00:07:55.892131901Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 00:07:55.892454 dockerd[1827]: time="2026-01-23T00:07:55.892428369Z" level=info msg="Initializing buildkit"
Jan 23 00:07:55.918572 dockerd[1827]: time="2026-01-23T00:07:55.918529828Z" level=info msg="Completed buildkit initialization"
Jan 23 00:07:55.926277 dockerd[1827]: time="2026-01-23T00:07:55.926225577Z" level=info msg="Daemon has completed initialization"
Jan 23 00:07:55.926726 dockerd[1827]: time="2026-01-23T00:07:55.926453460Z" level=info msg="API listen on /run/docker.sock"
Jan 23 00:07:55.929226 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 00:07:56.985267 containerd[1518]: time="2026-01-23T00:07:56.984092009Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 23 00:07:57.592592 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 00:07:57.595370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:07:57.666822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055357239.mount: Deactivated successfully.
Jan 23 00:07:57.763896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:07:57.774577 (kubelet)[2058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:07:57.818323 kubelet[2058]: E0123 00:07:57.818283 2058 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:07:57.821015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:07:57.821152 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:07:57.823185 systemd[1]: kubelet.service: Consumed 159ms CPU time, 106.5M memory peak.
Jan 23 00:07:58.726716 containerd[1518]: time="2026-01-23T00:07:58.725262160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:58.727111 containerd[1518]: time="2026-01-23T00:07:58.727029188Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571138"
Jan 23 00:07:58.728381 containerd[1518]: time="2026-01-23T00:07:58.727405705Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:58.731764 containerd[1518]: time="2026-01-23T00:07:58.731725005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:58.732771 containerd[1518]: time="2026-01-23T00:07:58.732734078Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.748597574s"
Jan 23 00:07:58.732771 containerd[1518]: time="2026-01-23T00:07:58.732774771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Jan 23 00:07:58.733452 containerd[1518]: time="2026-01-23T00:07:58.733427053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 00:07:59.151470 update_engine[1497]: I20260123 00:07:59.150967 1497 update_attempter.cc:509] Updating boot flags...
Jan 23 00:08:00.069051 containerd[1518]: time="2026-01-23T00:08:00.067887601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:00.069051 containerd[1518]: time="2026-01-23T00:08:00.069005395Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135497"
Jan 23 00:08:00.070096 containerd[1518]: time="2026-01-23T00:08:00.070055610Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:00.073485 containerd[1518]: time="2026-01-23T00:08:00.073439120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:00.074870 containerd[1518]: time="2026-01-23T00:08:00.074806384Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.341259534s"
Jan 23 00:08:00.074870 containerd[1518]: time="2026-01-23T00:08:00.074868681Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Jan 23 00:08:00.076070 containerd[1518]: time="2026-01-23T00:08:00.076035609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 00:08:01.002728 containerd[1518]: time="2026-01-23T00:08:01.002502350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:01.003752 containerd[1518]: time="2026-01-23T00:08:01.003658939Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191736"
Jan 23 00:08:01.005710 containerd[1518]: time="2026-01-23T00:08:01.004500364Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:01.010558 containerd[1518]: time="2026-01-23T00:08:01.010513171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:01.011537 containerd[1518]: time="2026-01-23T00:08:01.011481110Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 935.40261ms"
Jan 23 00:08:01.011537 containerd[1518]: time="2026-01-23T00:08:01.011530523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Jan 23 00:08:01.012556 containerd[1518]: time="2026-01-23T00:08:01.012529270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 00:08:01.980674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818070279.mount: Deactivated successfully.
Jan 23 00:08:02.234439 containerd[1518]: time="2026-01-23T00:08:02.234144515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:02.235681 containerd[1518]: time="2026-01-23T00:08:02.235381150Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805279"
Jan 23 00:08:02.236943 containerd[1518]: time="2026-01-23T00:08:02.236868889Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:02.238979 containerd[1518]: time="2026-01-23T00:08:02.238915091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:02.240180 containerd[1518]: time="2026-01-23T00:08:02.240127560Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.227448249s"
Jan 23 00:08:02.240261 containerd[1518]: time="2026-01-23T00:08:02.240183494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Jan 23 00:08:02.240849 containerd[1518]: time="2026-01-23T00:08:02.240815175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 00:08:02.840410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184271059.mount: Deactivated successfully.
Jan 23 00:08:03.838631 containerd[1518]: time="2026-01-23T00:08:03.838535172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:03.840311 containerd[1518]: time="2026-01-23T00:08:03.840244587Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498"
Jan 23 00:08:03.841588 containerd[1518]: time="2026-01-23T00:08:03.841515096Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:03.844716 containerd[1518]: time="2026-01-23T00:08:03.844639535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:03.846754 containerd[1518]: time="2026-01-23T00:08:03.846662747Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.605811083s"
Jan 23 00:08:03.846754 containerd[1518]: time="2026-01-23T00:08:03.846728203Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Jan 23 00:08:03.847421 containerd[1518]: time="2026-01-23T00:08:03.847388363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 00:08:04.370842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870431795.mount: Deactivated successfully.
Jan 23 00:08:04.377551 containerd[1518]: time="2026-01-23T00:08:04.377311003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:04.379720 containerd[1518]: time="2026-01-23T00:08:04.379639543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729"
Jan 23 00:08:04.381343 containerd[1518]: time="2026-01-23T00:08:04.381274483Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:04.384054 containerd[1518]: time="2026-01-23T00:08:04.383983711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:04.384840 containerd[1518]: time="2026-01-23T00:08:04.384677192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 537.166239ms"
Jan 23 00:08:04.384840 containerd[1518]: time="2026-01-23T00:08:04.384725683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Jan 23 00:08:04.385771 containerd[1518]: time="2026-01-23T00:08:04.385752441Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 00:08:04.994107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount421634150.mount: Deactivated successfully.
Jan 23 00:08:07.844134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 23 00:08:07.845669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:08.019867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:08.030174 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:08:08.077387 kubelet[2266]: E0123 00:08:08.077342 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:08:08.081785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:08:08.082068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:08:08.083818 systemd[1]: kubelet.service: Consumed 171ms CPU time, 105.4M memory peak.
Jan 23 00:08:08.117808 containerd[1518]: time="2026-01-23T00:08:08.117628342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:08.119819 containerd[1518]: time="2026-01-23T00:08:08.119757435Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98063043"
Jan 23 00:08:08.120803 containerd[1518]: time="2026-01-23T00:08:08.120738786Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:08.124562 containerd[1518]: time="2026-01-23T00:08:08.124271352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:08.125521 containerd[1518]: time="2026-01-23T00:08:08.125271826Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.739347028s"
Jan 23 00:08:08.125521 containerd[1518]: time="2026-01-23T00:08:08.125313795Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Jan 23 00:08:13.971064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:13.971238 systemd[1]: kubelet.service: Consumed 171ms CPU time, 105.4M memory peak.
Jan 23 00:08:13.974404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:14.008537 systemd[1]: Reload requested from client PID 2297 ('systemctl') (unit session-7.scope)...
Jan 23 00:08:14.008558 systemd[1]: Reloading...
Jan 23 00:08:14.148721 zram_generator::config[2340]: No configuration found.
Jan 23 00:08:14.337613 systemd[1]: Reloading finished in 328 ms.
Jan 23 00:08:14.407579 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 00:08:14.408039 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 00:08:14.408681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:14.408812 systemd[1]: kubelet.service: Consumed 115ms CPU time, 95M memory peak.
Jan 23 00:08:14.412518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:14.579193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:14.588073 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 00:08:14.631332 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 00:08:14.631332 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:08:14.632624 kubelet[2389]: I0123 00:08:14.632538 2389 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 00:08:15.433466 kubelet[2389]: I0123 00:08:15.433377 2389 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 00:08:15.433466 kubelet[2389]: I0123 00:08:15.433420 2389 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 00:08:15.435988 kubelet[2389]: I0123 00:08:15.435944 2389 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 00:08:15.435988 kubelet[2389]: I0123 00:08:15.435982 2389 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 00:08:15.437021 kubelet[2389]: I0123 00:08:15.436619 2389 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 00:08:15.444245 kubelet[2389]: E0123 00:08:15.444204 2389 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://88.198.161.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 00:08:15.445918 kubelet[2389]: I0123 00:08:15.445874 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:08:15.451322 kubelet[2389]: I0123 00:08:15.451299 2389 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 00:08:15.454317 kubelet[2389]: I0123 00:08:15.454025 2389 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 00:08:15.454317 kubelet[2389]: I0123 00:08:15.454271 2389 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 00:08:15.454487 kubelet[2389]: I0123 00:08:15.454301 2389 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-8734b5e787","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 00:08:15.454583 kubelet[2389]: I0123 00:08:15.454495 2389 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 00:08:15.454583 kubelet[2389]: I0123 00:08:15.454505 2389 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 00:08:15.454627 kubelet[2389]: I0123 00:08:15.454620 2389 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 00:08:15.457235 kubelet[2389]: I0123 00:08:15.457203 2389 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:15.459712 kubelet[2389]: I0123 00:08:15.458780 2389 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 00:08:15.459712 kubelet[2389]: I0123 00:08:15.458832 2389 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 00:08:15.459712 kubelet[2389]: I0123 00:08:15.458863 2389 kubelet.go:387] "Adding apiserver pod source"
Jan 23 00:08:15.459712 kubelet[2389]: I0123 00:08:15.458880 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 00:08:15.460567 kubelet[2389]: E0123 00:08:15.460481 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://88.198.161.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-8734b5e787&limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 00:08:15.460664 kubelet[2389]: I0123 00:08:15.460648 2389 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 00:08:15.461408 kubelet[2389]: I0123 00:08:15.461373 2389 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 00:08:15.461485 kubelet[2389]: I0123 00:08:15.461421 2389 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 00:08:15.461535 kubelet[2389]: W0123 00:08:15.461516 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 00:08:15.465782 kubelet[2389]: I0123 00:08:15.465750 2389 server.go:1262] "Started kubelet"
Jan 23 00:08:15.466060 kubelet[2389]: E0123 00:08:15.466025 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://88.198.161.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 00:08:15.467961 kubelet[2389]: I0123 00:08:15.467923 2389 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 00:08:15.469764 kubelet[2389]: I0123 00:08:15.469119 2389 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 00:08:15.470713 kubelet[2389]: I0123 00:08:15.470357 2389 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 00:08:15.470713 kubelet[2389]: I0123 00:08:15.470484 2389 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 00:08:15.474769 kubelet[2389]: I0123 00:08:15.472799 2389 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 00:08:15.474935 kubelet[2389]: I0123 00:08:15.474879 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 00:08:15.475760 kubelet[2389]: E0123 00:08:15.474004 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.198.161.46:6443/api/v1/namespaces/default/events\": dial tcp 88.198.161.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-n-8734b5e787.188d3389c6d21111 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-n-8734b5e787,UID:ci-4459-2-2-n-8734b5e787,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-8734b5e787,},FirstTimestamp:2026-01-23 00:08:15.465664785 +0000 UTC m=+0.873605489,LastTimestamp:2026-01-23 00:08:15.465664785 +0000 UTC m=+0.873605489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-8734b5e787,}" Jan 23 00:08:15.477981 kubelet[2389]: I0123 00:08:15.477953 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:08:15.483579 kubelet[2389]: E0123 00:08:15.483532 2389 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-8734b5e787\" not found" Jan 23 00:08:15.484729 kubelet[2389]: I0123 00:08:15.484152 2389 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:08:15.485488 kubelet[2389]: I0123 00:08:15.485405 2389 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:08:15.485575 kubelet[2389]: I0123 00:08:15.485525 2389 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:08:15.485994 kubelet[2389]: E0123 00:08:15.485953 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://88.198.161.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:08:15.486067 kubelet[2389]: E0123 00:08:15.486033 2389 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://88.198.161.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-8734b5e787?timeout=10s\": dial tcp 88.198.161.46:6443: connect: connection refused" interval="200ms" Jan 23 00:08:15.486833 kubelet[2389]: I0123 00:08:15.486803 2389 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:08:15.486944 kubelet[2389]: I0123 00:08:15.486926 2389 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:08:15.487675 kubelet[2389]: I0123 00:08:15.487638 2389 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:08:15.503128 kubelet[2389]: I0123 00:08:15.503010 2389 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:08:15.504421 kubelet[2389]: I0123 00:08:15.504394 2389 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 00:08:15.504652 kubelet[2389]: I0123 00:08:15.504639 2389 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:08:15.504826 kubelet[2389]: I0123 00:08:15.504813 2389 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:08:15.504942 kubelet[2389]: E0123 00:08:15.504923 2389 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:08:15.513854 kubelet[2389]: E0123 00:08:15.513825 2389 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:08:15.514008 kubelet[2389]: E0123 00:08:15.513819 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://88.198.161.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:08:15.519258 kubelet[2389]: I0123 00:08:15.519228 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:08:15.519258 kubelet[2389]: I0123 00:08:15.519248 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:08:15.519258 kubelet[2389]: I0123 00:08:15.519268 2389 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:08:15.520999 kubelet[2389]: I0123 00:08:15.520920 2389 policy_none.go:49] "None policy: Start" Jan 23 00:08:15.521121 kubelet[2389]: I0123 00:08:15.521009 2389 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 00:08:15.521121 kubelet[2389]: I0123 00:08:15.521037 2389 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:08:15.522394 kubelet[2389]: I0123 00:08:15.522368 2389 policy_none.go:47] "Start" Jan 23 00:08:15.527195 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 00:08:15.542942 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 00:08:15.548850 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 23 00:08:15.558868 kubelet[2389]: E0123 00:08:15.558823 2389 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:08:15.559592 kubelet[2389]: I0123 00:08:15.559551 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:08:15.559965 kubelet[2389]: I0123 00:08:15.559784 2389 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:08:15.562248 kubelet[2389]: I0123 00:08:15.561930 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:08:15.564956 kubelet[2389]: E0123 00:08:15.564896 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 00:08:15.564956 kubelet[2389]: E0123 00:08:15.564960 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-n-8734b5e787\" not found" Jan 23 00:08:15.626966 systemd[1]: Created slice kubepods-burstable-pod42e802775a7d2c97d008bdecd365cb5f.slice - libcontainer container kubepods-burstable-pod42e802775a7d2c97d008bdecd365cb5f.slice. Jan 23 00:08:15.640093 kubelet[2389]: E0123 00:08:15.640047 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.644138 systemd[1]: Created slice kubepods-burstable-pod5a08d0e857caa8eabb85ae0abcda6428.slice - libcontainer container kubepods-burstable-pod5a08d0e857caa8eabb85ae0abcda6428.slice. 
Jan 23 00:08:15.653892 kubelet[2389]: E0123 00:08:15.653850 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.658328 systemd[1]: Created slice kubepods-burstable-podeb3a3100b7f9c180b0766b61005848a9.slice - libcontainer container kubepods-burstable-podeb3a3100b7f9c180b0766b61005848a9.slice. Jan 23 00:08:15.661596 kubelet[2389]: E0123 00:08:15.661566 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.662591 kubelet[2389]: I0123 00:08:15.662563 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.663035 kubelet[2389]: E0123 00:08:15.663010 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.161.46:6443/api/v1/nodes\": dial tcp 88.198.161.46:6443: connect: connection refused" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.687192 kubelet[2389]: E0123 00:08:15.687008 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.161.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-8734b5e787?timeout=10s\": dial tcp 88.198.161.46:6443: connect: connection refused" interval="400ms" Jan 23 00:08:15.688156 kubelet[2389]: I0123 00:08:15.688037 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42e802775a7d2c97d008bdecd365cb5f-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" (UID: \"42e802775a7d2c97d008bdecd365cb5f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688156 kubelet[2389]: I0123 00:08:15.688110 2389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688387 kubelet[2389]: I0123 00:08:15.688143 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688387 kubelet[2389]: I0123 00:08:15.688355 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688646 kubelet[2389]: I0123 00:08:15.688592 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688820 kubelet[2389]: I0123 00:08:15.688743 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb3a3100b7f9c180b0766b61005848a9-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-8734b5e787\" (UID: 
\"eb3a3100b7f9c180b0766b61005848a9\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688820 kubelet[2389]: I0123 00:08:15.688781 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42e802775a7d2c97d008bdecd365cb5f-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" (UID: \"42e802775a7d2c97d008bdecd365cb5f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.688968 kubelet[2389]: I0123 00:08:15.688805 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42e802775a7d2c97d008bdecd365cb5f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" (UID: \"42e802775a7d2c97d008bdecd365cb5f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.689116 kubelet[2389]: I0123 00:08:15.689023 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.866722 kubelet[2389]: I0123 00:08:15.866585 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.867103 kubelet[2389]: E0123 00:08:15.867033 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.161.46:6443/api/v1/nodes\": dial tcp 88.198.161.46:6443: connect: connection refused" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:15.945591 containerd[1518]: time="2026-01-23T00:08:15.944409733Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-8734b5e787,Uid:42e802775a7d2c97d008bdecd365cb5f,Namespace:kube-system,Attempt:0,}" Jan 23 00:08:15.958896 containerd[1518]: time="2026-01-23T00:08:15.958830743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-8734b5e787,Uid:5a08d0e857caa8eabb85ae0abcda6428,Namespace:kube-system,Attempt:0,}" Jan 23 00:08:15.965210 containerd[1518]: time="2026-01-23T00:08:15.965146476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-8734b5e787,Uid:eb3a3100b7f9c180b0766b61005848a9,Namespace:kube-system,Attempt:0,}" Jan 23 00:08:16.088228 kubelet[2389]: E0123 00:08:16.088139 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.161.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-8734b5e787?timeout=10s\": dial tcp 88.198.161.46:6443: connect: connection refused" interval="800ms" Jan 23 00:08:16.270393 kubelet[2389]: I0123 00:08:16.270247 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:16.271493 kubelet[2389]: E0123 00:08:16.271111 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.198.161.46:6443/api/v1/nodes\": dial tcp 88.198.161.46:6443: connect: connection refused" node="ci-4459-2-2-n-8734b5e787" Jan 23 00:08:16.479780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount220733621.mount: Deactivated successfully. 
Jan 23 00:08:16.488719 containerd[1518]: time="2026-01-23T00:08:16.488264895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:08:16.489960 containerd[1518]: time="2026-01-23T00:08:16.489927292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 23 00:08:16.492601 containerd[1518]: time="2026-01-23T00:08:16.492539425Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:08:16.497289 containerd[1518]: time="2026-01-23T00:08:16.497246296Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:08:16.498015 containerd[1518]: time="2026-01-23T00:08:16.497983641Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:08:16.500259 containerd[1518]: time="2026-01-23T00:08:16.500224761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:08:16.501562 containerd[1518]: time="2026-01-23T00:08:16.501529907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 540.975669ms" Jan 23 00:08:16.501952 containerd[1518]: 
time="2026-01-23T00:08:16.501930964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:08:16.502969 containerd[1518]: time="2026-01-23T00:08:16.502943309Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:08:16.505397 containerd[1518]: time="2026-01-23T00:08:16.505359933Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 538.028574ms" Jan 23 00:08:16.512355 containerd[1518]: time="2026-01-23T00:08:16.512312925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.761407ms" Jan 23 00:08:16.542550 containerd[1518]: time="2026-01-23T00:08:16.542252794Z" level=info msg="connecting to shim 9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b" address="unix:///run/containerd/s/3b9326bf848d18dfcdf6336ea762171ec43363f9c369f60b88fbc63e6d16647d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:16.556956 containerd[1518]: time="2026-01-23T00:08:16.556906164Z" level=info msg="connecting to shim 264048158041f3c5d8d684b33821869acb461ff4c4462fe879d439cb54c7ff95" address="unix:///run/containerd/s/b9d2a59a21a98aa1ac5c6f9b4d22a7cdf9bd24e253f82a3d78e2d561467679e2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:16.567892 containerd[1518]: time="2026-01-23T00:08:16.567839283Z" level=info msg="connecting to shim 
85bbc2a2bd57c1ace3a181e4c83c7f445c32ea83227713182b53d917d97ebc48" address="unix:///run/containerd/s/1b10f6382ca3ad49e23b6f5d7c09b520c2a4cbe18f1f880ce433fc78f0f9ceea" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:16.583626 systemd[1]: Started cri-containerd-9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b.scope - libcontainer container 9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b. Jan 23 00:08:16.608582 kubelet[2389]: E0123 00:08:16.608537 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://88.198.161.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:08:16.610967 systemd[1]: Started cri-containerd-264048158041f3c5d8d684b33821869acb461ff4c4462fe879d439cb54c7ff95.scope - libcontainer container 264048158041f3c5d8d684b33821869acb461ff4c4462fe879d439cb54c7ff95. Jan 23 00:08:16.629932 systemd[1]: Started cri-containerd-85bbc2a2bd57c1ace3a181e4c83c7f445c32ea83227713182b53d917d97ebc48.scope - libcontainer container 85bbc2a2bd57c1ace3a181e4c83c7f445c32ea83227713182b53d917d97ebc48. 
Jan 23 00:08:16.674356 containerd[1518]: time="2026-01-23T00:08:16.674195250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-8734b5e787,Uid:5a08d0e857caa8eabb85ae0abcda6428,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b\"" Jan 23 00:08:16.687836 containerd[1518]: time="2026-01-23T00:08:16.687772946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-8734b5e787,Uid:eb3a3100b7f9c180b0766b61005848a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"264048158041f3c5d8d684b33821869acb461ff4c4462fe879d439cb54c7ff95\"" Jan 23 00:08:16.688229 containerd[1518]: time="2026-01-23T00:08:16.688137878Z" level=info msg="CreateContainer within sandbox \"9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:08:16.692864 containerd[1518]: time="2026-01-23T00:08:16.692816785Z" level=info msg="CreateContainer within sandbox \"264048158041f3c5d8d684b33821869acb461ff4c4462fe879d439cb54c7ff95\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:08:16.701157 containerd[1518]: time="2026-01-23T00:08:16.701093765Z" level=info msg="Container ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:16.703334 containerd[1518]: time="2026-01-23T00:08:16.702928427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-8734b5e787,Uid:42e802775a7d2c97d008bdecd365cb5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"85bbc2a2bd57c1ace3a181e4c83c7f445c32ea83227713182b53d917d97ebc48\"" Jan 23 00:08:16.706598 containerd[1518]: time="2026-01-23T00:08:16.706557465Z" level=info msg="Container 4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3: CDI devices from CRI Config.CDIDevices: []" Jan 23 
00:08:16.708956 containerd[1518]: time="2026-01-23T00:08:16.708528146Z" level=info msg="CreateContainer within sandbox \"85bbc2a2bd57c1ace3a181e4c83c7f445c32ea83227713182b53d917d97ebc48\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:08:16.710741 containerd[1518]: time="2026-01-23T00:08:16.710667411Z" level=info msg="CreateContainer within sandbox \"9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8\"" Jan 23 00:08:16.711373 containerd[1518]: time="2026-01-23T00:08:16.711345947Z" level=info msg="StartContainer for \"ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8\"" Jan 23 00:08:16.713069 containerd[1518]: time="2026-01-23T00:08:16.713035268Z" level=info msg="connecting to shim ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8" address="unix:///run/containerd/s/3b9326bf848d18dfcdf6336ea762171ec43363f9c369f60b88fbc63e6d16647d" protocol=ttrpc version=3 Jan 23 00:08:16.715394 containerd[1518]: time="2026-01-23T00:08:16.715347638Z" level=info msg="CreateContainer within sandbox \"264048158041f3c5d8d684b33821869acb461ff4c4462fe879d439cb54c7ff95\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3\"" Jan 23 00:08:16.716083 containerd[1518]: time="2026-01-23T00:08:16.716052139Z" level=info msg="StartContainer for \"4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3\"" Jan 23 00:08:16.717707 containerd[1518]: time="2026-01-23T00:08:16.717167538Z" level=info msg="connecting to shim 4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3" address="unix:///run/containerd/s/b9d2a59a21a98aa1ac5c6f9b4d22a7cdf9bd24e253f82a3d78e2d561467679e2" protocol=ttrpc version=3 Jan 23 00:08:16.737541 containerd[1518]: 
time="2026-01-23T00:08:16.737501957Z" level=info msg="Container 951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:16.745916 systemd[1]: Started cri-containerd-ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8.scope - libcontainer container ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8. Jan 23 00:08:16.753004 containerd[1518]: time="2026-01-23T00:08:16.752949800Z" level=info msg="CreateContainer within sandbox \"85bbc2a2bd57c1ace3a181e4c83c7f445c32ea83227713182b53d917d97ebc48\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5\"" Jan 23 00:08:16.754098 containerd[1518]: time="2026-01-23T00:08:16.754042076Z" level=info msg="StartContainer for \"951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5\"" Jan 23 00:08:16.755448 systemd[1]: Started cri-containerd-4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3.scope - libcontainer container 4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3. Jan 23 00:08:16.756861 containerd[1518]: time="2026-01-23T00:08:16.756815112Z" level=info msg="connecting to shim 951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5" address="unix:///run/containerd/s/1b10f6382ca3ad49e23b6f5d7c09b520c2a4cbe18f1f880ce433fc78f0f9ceea" protocol=ttrpc version=3 Jan 23 00:08:16.785987 systemd[1]: Started cri-containerd-951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5.scope - libcontainer container 951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5. 
Jan 23 00:08:16.801362 kubelet[2389]: E0123 00:08:16.801208 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://88.198.161.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 00:08:16.822384 containerd[1518]: time="2026-01-23T00:08:16.822341536Z" level=info msg="StartContainer for \"ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8\" returns successfully"
Jan 23 00:08:16.866133 containerd[1518]: time="2026-01-23T00:08:16.866046448Z" level=info msg="StartContainer for \"4c89009143119a251b7b7ffc78510692e2b055350622ba6bf97c484aa1a059e3\" returns successfully"
Jan 23 00:08:16.882884 containerd[1518]: time="2026-01-23T00:08:16.882831442Z" level=info msg="StartContainer for \"951dbb4a39701a32b6bf41f067d0aa273ad67a0aef98afaafa8d0b6d3cdd68c5\" returns successfully"
Jan 23 00:08:16.889383 kubelet[2389]: E0123 00:08:16.889341 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.198.161.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-8734b5e787?timeout=10s\": dial tcp 88.198.161.46:6443: connect: connection refused" interval="1.6s"
Jan 23 00:08:16.975078 kubelet[2389]: E0123 00:08:16.975025 2389 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://88.198.161.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-8734b5e787&limit=500&resourceVersion=0\": dial tcp 88.198.161.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 00:08:17.073965 kubelet[2389]: I0123 00:08:17.073740 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:17.528442 kubelet[2389]: E0123 00:08:17.528405 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:17.533207 kubelet[2389]: E0123 00:08:17.533164 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:17.537903 kubelet[2389]: E0123 00:08:17.537866 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:18.542930 kubelet[2389]: E0123 00:08:18.542841 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:18.543566 kubelet[2389]: E0123 00:08:18.543323 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.088355 kubelet[2389]: E0123 00:08:20.088319 2389 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.251364 kubelet[2389]: E0123 00:08:20.251317 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-n-8734b5e787\" not found" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.371109 kubelet[2389]: I0123 00:08:20.371012 2389 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.384043 kubelet[2389]: I0123 00:08:20.383990 2389 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.398461 kubelet[2389]: E0123 00:08:20.398408 2389 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.398461 kubelet[2389]: I0123 00:08:20.398453 2389 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.402792 kubelet[2389]: E0123 00:08:20.402746 2389 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-8734b5e787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.402792 kubelet[2389]: I0123 00:08:20.402779 2389 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.409484 kubelet[2389]: E0123 00:08:20.409439 2389 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.462023 kubelet[2389]: I0123 00:08:20.461968 2389 apiserver.go:52] "Watching apiserver"
Jan 23 00:08:20.484509 kubelet[2389]: I0123 00:08:20.484452 2389 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 00:08:20.592523 kubelet[2389]: I0123 00:08:20.592036 2389 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:20.598667 kubelet[2389]: E0123 00:08:20.598637 2389 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:22.555871 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-7.scope)...
Jan 23 00:08:22.556233 systemd[1]: Reloading...
Jan 23 00:08:22.664853 zram_generator::config[2716]: No configuration found.
Jan 23 00:08:22.891771 systemd[1]: Reloading finished in 335 ms.
Jan 23 00:08:22.917514 kubelet[2389]: I0123 00:08:22.917218 2389 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:08:22.919070 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:22.935275 systemd[1]: kubelet.service: Deactivated successfully.
Jan 23 00:08:22.937767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:22.937866 systemd[1]: kubelet.service: Consumed 1.350s CPU time, 121.4M memory peak.
Jan 23 00:08:22.940834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:08:23.097980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:08:23.112101 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 00:08:23.181995 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 00:08:23.181995 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:08:23.183977 kubelet[2765]: I0123 00:08:23.182259 2765 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 00:08:23.192597 kubelet[2765]: I0123 00:08:23.192534 2765 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 00:08:23.192597 kubelet[2765]: I0123 00:08:23.192571 2765 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 00:08:23.192597 kubelet[2765]: I0123 00:08:23.192598 2765 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 00:08:23.192597 kubelet[2765]: I0123 00:08:23.192605 2765 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 00:08:23.192896 kubelet[2765]: I0123 00:08:23.192852 2765 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 00:08:23.196466 kubelet[2765]: I0123 00:08:23.196163 2765 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 23 00:08:23.199021 kubelet[2765]: I0123 00:08:23.198982 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:08:23.203700 kubelet[2765]: I0123 00:08:23.203658 2765 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 00:08:23.206731 kubelet[2765]: I0123 00:08:23.206228 2765 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 00:08:23.206731 kubelet[2765]: I0123 00:08:23.206506 2765 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 00:08:23.206731 kubelet[2765]: I0123 00:08:23.206533 2765 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-8734b5e787","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 00:08:23.206731 kubelet[2765]: I0123 00:08:23.206698 2765 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 00:08:23.206982 kubelet[2765]: I0123 00:08:23.206707 2765 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 00:08:23.206982 kubelet[2765]: I0123 00:08:23.206742 2765 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 00:08:23.207840 kubelet[2765]: I0123 00:08:23.207822 2765 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:23.208070 kubelet[2765]: I0123 00:08:23.208058 2765 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 00:08:23.208118 kubelet[2765]: I0123 00:08:23.208084 2765 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 00:08:23.208118 kubelet[2765]: I0123 00:08:23.208109 2765 kubelet.go:387] "Adding apiserver pod source"
Jan 23 00:08:23.208161 kubelet[2765]: I0123 00:08:23.208123 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 00:08:23.209949 kubelet[2765]: I0123 00:08:23.209913 2765 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 00:08:23.211097 kubelet[2765]: I0123 00:08:23.211060 2765 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 00:08:23.211171 kubelet[2765]: I0123 00:08:23.211125 2765 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 00:08:23.217282 kubelet[2765]: I0123 00:08:23.213533 2765 server.go:1262] "Started kubelet"
Jan 23 00:08:23.217282 kubelet[2765]: I0123 00:08:23.217196 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 00:08:23.221536 kubelet[2765]: I0123 00:08:23.221482 2765 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 00:08:23.222464 kubelet[2765]: I0123 00:08:23.222387 2765 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 00:08:23.222553 kubelet[2765]: I0123 00:08:23.222478 2765 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 00:08:23.223619 kubelet[2765]: I0123 00:08:23.223599 2765 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 00:08:23.224849 kubelet[2765]: I0123 00:08:23.224819 2765 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 00:08:23.231703 kubelet[2765]: I0123 00:08:23.230108 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 00:08:23.238107 kubelet[2765]: I0123 00:08:23.238004 2765 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 00:08:23.238371 kubelet[2765]: E0123 00:08:23.238158 2765 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-8734b5e787\" not found"
Jan 23 00:08:23.253966 kubelet[2765]: I0123 00:08:23.253923 2765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 00:08:23.261471 kubelet[2765]: I0123 00:08:23.260878 2765 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 00:08:23.261471 kubelet[2765]: I0123 00:08:23.261012 2765 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 00:08:23.267338 kubelet[2765]: E0123 00:08:23.267120 2765 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 00:08:23.267492 kubelet[2765]: I0123 00:08:23.267459 2765 factory.go:223] Registration of the containerd container factory successfully
Jan 23 00:08:23.267492 kubelet[2765]: I0123 00:08:23.267474 2765 factory.go:223] Registration of the systemd container factory successfully
Jan 23 00:08:23.268535 kubelet[2765]: I0123 00:08:23.268375 2765 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 00:08:23.273176 kubelet[2765]: I0123 00:08:23.273146 2765 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 00:08:23.273748 kubelet[2765]: I0123 00:08:23.273306 2765 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 00:08:23.273748 kubelet[2765]: I0123 00:08:23.273339 2765 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 00:08:23.273748 kubelet[2765]: E0123 00:08:23.273383 2765 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 00:08:23.337766 kubelet[2765]: I0123 00:08:23.337736 2765 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 00:08:23.337941 kubelet[2765]: I0123 00:08:23.337925 2765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 00:08:23.338024 kubelet[2765]: I0123 00:08:23.338015 2765 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:08:23.338237 kubelet[2765]: I0123 00:08:23.338221 2765 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 23 00:08:23.338327 kubelet[2765]: I0123 00:08:23.338293 2765 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 23 00:08:23.338393 kubelet[2765]: I0123 00:08:23.338383 2765 policy_none.go:49] "None policy: Start"
Jan 23 00:08:23.338468 kubelet[2765]: I0123 00:08:23.338459 2765 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 00:08:23.338530 kubelet[2765]: I0123 00:08:23.338521 2765 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 00:08:23.338802 kubelet[2765]: I0123 00:08:23.338757 2765 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 23 00:08:23.338802 kubelet[2765]: I0123 00:08:23.338773 2765 policy_none.go:47] "Start"
Jan 23 00:08:23.344974 kubelet[2765]: E0123 00:08:23.344932 2765 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 00:08:23.345151 kubelet[2765]: I0123 00:08:23.345129 2765 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 00:08:23.345188 kubelet[2765]: I0123 00:08:23.345147 2765 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 00:08:23.345473 kubelet[2765]: I0123 00:08:23.345446 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 00:08:23.348497 kubelet[2765]: E0123 00:08:23.348013 2765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 00:08:23.375360 kubelet[2765]: I0123 00:08:23.375280 2765 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.377341 kubelet[2765]: I0123 00:08:23.377287 2765 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.378060 kubelet[2765]: I0123 00:08:23.378040 2765 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.457184 kubelet[2765]: I0123 00:08:23.457025 2765 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.469673 kubelet[2765]: I0123 00:08:23.469637 2765 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.469823 kubelet[2765]: I0123 00:08:23.469740 2765 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.563263 kubelet[2765]: I0123 00:08:23.563207 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.564492 kubelet[2765]: I0123 00:08:23.564459 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.564747 kubelet[2765]: I0123 00:08:23.564673 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb3a3100b7f9c180b0766b61005848a9-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-8734b5e787\" (UID: \"eb3a3100b7f9c180b0766b61005848a9\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.564858 kubelet[2765]: I0123 00:08:23.564804 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42e802775a7d2c97d008bdecd365cb5f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" (UID: \"42e802775a7d2c97d008bdecd365cb5f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.564858 kubelet[2765]: I0123 00:08:23.564834 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.564999 kubelet[2765]: I0123 00:08:23.564945 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.564999 kubelet[2765]: I0123 00:08:23.564975 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42e802775a7d2c97d008bdecd365cb5f-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" (UID: \"42e802775a7d2c97d008bdecd365cb5f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.565116 kubelet[2765]: I0123 00:08:23.565103 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42e802775a7d2c97d008bdecd365cb5f-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-8734b5e787\" (UID: \"42e802775a7d2c97d008bdecd365cb5f\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:23.565258 kubelet[2765]: I0123 00:08:23.565224 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a08d0e857caa8eabb85ae0abcda6428-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-8734b5e787\" (UID: \"5a08d0e857caa8eabb85ae0abcda6428\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:24.209094 kubelet[2765]: I0123 00:08:24.208910 2765 apiserver.go:52] "Watching apiserver"
Jan 23 00:08:24.262221 kubelet[2765]: I0123 00:08:24.262166 2765 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 23 00:08:24.311876 kubelet[2765]: I0123 00:08:24.311801 2765 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:24.325413 kubelet[2765]: E0123 00:08:24.325377 2765 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-8734b5e787\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787"
Jan 23 00:08:24.348317 kubelet[2765]: I0123 00:08:24.348257 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-n-8734b5e787" podStartSLOduration=1.348241278 podStartE2EDuration="1.348241278s" podCreationTimestamp="2026-01-23 00:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:24.346404793 +0000 UTC m=+1.226782748" watchObservedRunningTime="2026-01-23 00:08:24.348241278 +0000 UTC m=+1.228619233"
Jan 23 00:08:24.377100 kubelet[2765]: I0123 00:08:24.377017 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-8734b5e787" podStartSLOduration=1.376993052 podStartE2EDuration="1.376993052s" podCreationTimestamp="2026-01-23 00:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:24.362020658 +0000 UTC m=+1.242398613" watchObservedRunningTime="2026-01-23 00:08:24.376993052 +0000 UTC m=+1.257371007"
Jan 23 00:08:24.394388 kubelet[2765]: I0123 00:08:24.393845 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-n-8734b5e787" podStartSLOduration=1.393821574 podStartE2EDuration="1.393821574s" podCreationTimestamp="2026-01-23 00:08:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:24.377902834 +0000 UTC m=+1.258280789" watchObservedRunningTime="2026-01-23 00:08:24.393821574 +0000 UTC m=+1.274199529"
Jan 23 00:08:28.619228 kubelet[2765]: I0123 00:08:28.618911 2765 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 00:08:28.620682 containerd[1518]: time="2026-01-23T00:08:28.620634221Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 00:08:28.621891 kubelet[2765]: I0123 00:08:28.621481 2765 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 00:08:29.487167 systemd[1]: Created slice kubepods-besteffort-pod1e5d7e57_11f5_448c_beba_6db5b403e50d.slice - libcontainer container kubepods-besteffort-pod1e5d7e57_11f5_448c_beba_6db5b403e50d.slice.
Jan 23 00:08:29.507907 kubelet[2765]: I0123 00:08:29.507835 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e5d7e57-11f5-448c-beba-6db5b403e50d-xtables-lock\") pod \"kube-proxy-qgqk7\" (UID: \"1e5d7e57-11f5-448c-beba-6db5b403e50d\") " pod="kube-system/kube-proxy-qgqk7"
Jan 23 00:08:29.507907 kubelet[2765]: I0123 00:08:29.507912 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e5d7e57-11f5-448c-beba-6db5b403e50d-lib-modules\") pod \"kube-proxy-qgqk7\" (UID: \"1e5d7e57-11f5-448c-beba-6db5b403e50d\") " pod="kube-system/kube-proxy-qgqk7"
Jan 23 00:08:29.508195 kubelet[2765]: I0123 00:08:29.507956 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgfg\" (UniqueName: \"kubernetes.io/projected/1e5d7e57-11f5-448c-beba-6db5b403e50d-kube-api-access-nxgfg\") pod \"kube-proxy-qgqk7\" (UID: \"1e5d7e57-11f5-448c-beba-6db5b403e50d\") " pod="kube-system/kube-proxy-qgqk7"
Jan 23 00:08:29.508195 kubelet[2765]: I0123 00:08:29.508008 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e5d7e57-11f5-448c-beba-6db5b403e50d-kube-proxy\") pod \"kube-proxy-qgqk7\" (UID: \"1e5d7e57-11f5-448c-beba-6db5b403e50d\") " pod="kube-system/kube-proxy-qgqk7"
Jan 23 00:08:29.774342 systemd[1]: Created slice kubepods-besteffort-pod64b570ef_32f4_4624_97d1_0589103bdde8.slice - libcontainer container kubepods-besteffort-pod64b570ef_32f4_4624_97d1_0589103bdde8.slice.
Jan 23 00:08:29.799825 containerd[1518]: time="2026-01-23T00:08:29.799753972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgqk7,Uid:1e5d7e57-11f5-448c-beba-6db5b403e50d,Namespace:kube-system,Attempt:0,}"
Jan 23 00:08:29.809777 kubelet[2765]: I0123 00:08:29.809726 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdtz\" (UniqueName: \"kubernetes.io/projected/64b570ef-32f4-4624-97d1-0589103bdde8-kube-api-access-xrdtz\") pod \"tigera-operator-65cdcdfd6d-fdp2r\" (UID: \"64b570ef-32f4-4624-97d1-0589103bdde8\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-fdp2r"
Jan 23 00:08:29.810837 kubelet[2765]: I0123 00:08:29.810316 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64b570ef-32f4-4624-97d1-0589103bdde8-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-fdp2r\" (UID: \"64b570ef-32f4-4624-97d1-0589103bdde8\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-fdp2r"
Jan 23 00:08:29.817350 containerd[1518]: time="2026-01-23T00:08:29.817303194Z" level=info msg="connecting to shim 3d0711334369462f8a56f337d38ac88b04a6cf83b58e169b059b4de1241d2fd6" address="unix:///run/containerd/s/3cc1a50c0e672a21605884722eeed48308ca0ef786750c68c2dd23dcc3f86b2d" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:29.841904 systemd[1]: Started cri-containerd-3d0711334369462f8a56f337d38ac88b04a6cf83b58e169b059b4de1241d2fd6.scope - libcontainer container 3d0711334369462f8a56f337d38ac88b04a6cf83b58e169b059b4de1241d2fd6.
Jan 23 00:08:29.871780 containerd[1518]: time="2026-01-23T00:08:29.871728194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgqk7,Uid:1e5d7e57-11f5-448c-beba-6db5b403e50d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d0711334369462f8a56f337d38ac88b04a6cf83b58e169b059b4de1241d2fd6\""
Jan 23 00:08:29.878329 containerd[1518]: time="2026-01-23T00:08:29.878245201Z" level=info msg="CreateContainer within sandbox \"3d0711334369462f8a56f337d38ac88b04a6cf83b58e169b059b4de1241d2fd6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 00:08:29.894699 containerd[1518]: time="2026-01-23T00:08:29.894626226Z" level=info msg="Container 4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:29.905577 containerd[1518]: time="2026-01-23T00:08:29.905473382Z" level=info msg="CreateContainer within sandbox \"3d0711334369462f8a56f337d38ac88b04a6cf83b58e169b059b4de1241d2fd6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495\""
Jan 23 00:08:29.907976 containerd[1518]: time="2026-01-23T00:08:29.907114065Z" level=info msg="StartContainer for \"4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495\""
Jan 23 00:08:29.913923 containerd[1518]: time="2026-01-23T00:08:29.913846613Z" level=info msg="connecting to shim 4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495" address="unix:///run/containerd/s/3cc1a50c0e672a21605884722eeed48308ca0ef786750c68c2dd23dcc3f86b2d" protocol=ttrpc version=3
Jan 23 00:08:29.944112 systemd[1]: Started cri-containerd-4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495.scope - libcontainer container 4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495.
Jan 23 00:08:30.027018 containerd[1518]: time="2026-01-23T00:08:30.025928682Z" level=info msg="StartContainer for \"4a98284c199d96144f92b4d1007575bdf655a6cdd974cc84ba73a90279dfb495\" returns successfully"
Jan 23 00:08:30.084940 containerd[1518]: time="2026-01-23T00:08:30.084881010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-fdp2r,Uid:64b570ef-32f4-4624-97d1-0589103bdde8,Namespace:tigera-operator,Attempt:0,}"
Jan 23 00:08:30.107954 containerd[1518]: time="2026-01-23T00:08:30.107905567Z" level=info msg="connecting to shim dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306" address="unix:///run/containerd/s/226f1f02f1586669acc6595dee591b5371a17bc0a3855a3a29c79b4c83d451c6" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:30.136937 systemd[1]: Started cri-containerd-dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306.scope - libcontainer container dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306.
Jan 23 00:08:30.178503 containerd[1518]: time="2026-01-23T00:08:30.178372253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-fdp2r,Uid:64b570ef-32f4-4624-97d1-0589103bdde8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306\""
Jan 23 00:08:30.180432 containerd[1518]: time="2026-01-23T00:08:30.180362287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 00:08:30.349706 kubelet[2765]: I0123 00:08:30.349571 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qgqk7" podStartSLOduration=1.3495523249999999 podStartE2EDuration="1.349552325s" podCreationTimestamp="2026-01-23 00:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:30.348846456 +0000 UTC m=+7.229224411" watchObservedRunningTime="2026-01-23 00:08:30.349552325 +0000 UTC m=+7.229930280"
Jan 23 00:08:31.899056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4147792416.mount: Deactivated successfully.
Jan 23 00:08:32.286127 containerd[1518]: time="2026-01-23T00:08:32.285771763Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:32.288703 containerd[1518]: time="2026-01-23T00:08:32.288576385Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 23 00:08:32.290770 containerd[1518]: time="2026-01-23T00:08:32.290602934Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:32.295237 containerd[1518]: time="2026-01-23T00:08:32.295165800Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:32.296348 containerd[1518]: time="2026-01-23T00:08:32.295757816Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.115358366s"
Jan 23 00:08:32.296348 containerd[1518]: time="2026-01-23T00:08:32.295851464Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 23 00:08:32.301837 containerd[1518]: time="2026-01-23T00:08:32.301784459Z" level=info msg="CreateContainer within sandbox \"dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 00:08:32.312733 containerd[1518]: time="2026-01-23T00:08:32.309438213Z" level=info msg="Container ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:32.333248 containerd[1518]: time="2026-01-23T00:08:32.332522090Z" level=info msg="CreateContainer within sandbox \"dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c\""
Jan 23 00:08:32.334912 containerd[1518]: time="2026-01-23T00:08:32.334881350Z" level=info msg="StartContainer for \"ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c\""
Jan 23 00:08:32.338010 containerd[1518]: time="2026-01-23T00:08:32.337862149Z" level=info msg="connecting to shim ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c" address="unix:///run/containerd/s/226f1f02f1586669acc6595dee591b5371a17bc0a3855a3a29c79b4c83d451c6" protocol=ttrpc version=3
Jan 23 00:08:32.370183 systemd[1]: Started cri-containerd-ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c.scope - libcontainer container ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c.
Jan 23 00:08:32.409935 containerd[1518]: time="2026-01-23T00:08:32.409858914Z" level=info msg="StartContainer for \"ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c\" returns successfully"
Jan 23 00:08:38.577376 sudo[1809]: pam_unix(sudo:session): session closed for user root
Jan 23 00:08:38.676056 sshd[1808]: Connection closed by 68.220.241.50 port 41836
Jan 23 00:08:38.678967 sshd-session[1805]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:38.685095 systemd[1]: sshd@6-88.198.161.46:22-68.220.241.50:41836.service: Deactivated successfully.
Jan 23 00:08:38.690550 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 00:08:38.690973 systemd[1]: session-7.scope: Consumed 7.608s CPU time, 223M memory peak. Jan 23 00:08:38.693275 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:08:38.697858 systemd-logind[1494]: Removed session 7. Jan 23 00:08:51.940352 kubelet[2765]: I0123 00:08:51.940173 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-fdp2r" podStartSLOduration=20.823211856 podStartE2EDuration="22.94014945s" podCreationTimestamp="2026-01-23 00:08:29 +0000 UTC" firstStartedPulling="2026-01-23 00:08:30.179939726 +0000 UTC m=+7.060317681" lastFinishedPulling="2026-01-23 00:08:32.29687732 +0000 UTC m=+9.177255275" observedRunningTime="2026-01-23 00:08:33.383973175 +0000 UTC m=+10.264351130" watchObservedRunningTime="2026-01-23 00:08:51.94014945 +0000 UTC m=+28.820527405" Jan 23 00:08:51.955483 systemd[1]: Created slice kubepods-besteffort-poda7d8b3a8_9461_4a40_8e4e_6c0d629855e2.slice - libcontainer container kubepods-besteffort-poda7d8b3a8_9461_4a40_8e4e_6c0d629855e2.slice. 
Jan 23 00:08:52.066100 kubelet[2765]: I0123 00:08:52.065929 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7d8b3a8-9461-4a40-8e4e-6c0d629855e2-tigera-ca-bundle\") pod \"calico-typha-555c5ddc6f-nrw7f\" (UID: \"a7d8b3a8-9461-4a40-8e4e-6c0d629855e2\") " pod="calico-system/calico-typha-555c5ddc6f-nrw7f" Jan 23 00:08:52.066100 kubelet[2765]: I0123 00:08:52.065979 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r55gh\" (UniqueName: \"kubernetes.io/projected/a7d8b3a8-9461-4a40-8e4e-6c0d629855e2-kube-api-access-r55gh\") pod \"calico-typha-555c5ddc6f-nrw7f\" (UID: \"a7d8b3a8-9461-4a40-8e4e-6c0d629855e2\") " pod="calico-system/calico-typha-555c5ddc6f-nrw7f" Jan 23 00:08:52.066100 kubelet[2765]: I0123 00:08:52.066041 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a7d8b3a8-9461-4a40-8e4e-6c0d629855e2-typha-certs\") pod \"calico-typha-555c5ddc6f-nrw7f\" (UID: \"a7d8b3a8-9461-4a40-8e4e-6c0d629855e2\") " pod="calico-system/calico-typha-555c5ddc6f-nrw7f" Jan 23 00:08:52.268263 containerd[1518]: time="2026-01-23T00:08:52.268012416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555c5ddc6f-nrw7f,Uid:a7d8b3a8-9461-4a40-8e4e-6c0d629855e2,Namespace:calico-system,Attempt:0,}" Jan 23 00:08:52.299136 containerd[1518]: time="2026-01-23T00:08:52.299005270Z" level=info msg="connecting to shim f773d549c37c55121c59bd997c54e2a82338f535d4c83a6965391235998e7408" address="unix:///run/containerd/s/a5d4cd10dae30bb869a1bd0a1143769bfb39dce69bf20d7c0f9951cf223128e0" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:52.348308 systemd[1]: Started cri-containerd-f773d549c37c55121c59bd997c54e2a82338f535d4c83a6965391235998e7408.scope - libcontainer container 
f773d549c37c55121c59bd997c54e2a82338f535d4c83a6965391235998e7408. Jan 23 00:08:52.396331 systemd[1]: Created slice kubepods-besteffort-pod6d40903f_8e1c_47ce_9f76_8b8f9258adc5.slice - libcontainer container kubepods-besteffort-pod6d40903f_8e1c_47ce_9f76_8b8f9258adc5.slice. Jan 23 00:08:52.441152 containerd[1518]: time="2026-01-23T00:08:52.441091627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555c5ddc6f-nrw7f,Uid:a7d8b3a8-9461-4a40-8e4e-6c0d629855e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f773d549c37c55121c59bd997c54e2a82338f535d4c83a6965391235998e7408\"" Jan 23 00:08:52.443713 containerd[1518]: time="2026-01-23T00:08:52.443267443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 00:08:52.469214 kubelet[2765]: I0123 00:08:52.469147 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-var-run-calico\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.469654 kubelet[2765]: I0123 00:08:52.469492 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb44x\" (UniqueName: \"kubernetes.io/projected/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-kube-api-access-zb44x\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.469654 kubelet[2765]: I0123 00:08:52.469608 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-tigera-ca-bundle\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.469875 kubelet[2765]: I0123 00:08:52.469804 2765 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-node-certs\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470018 kubelet[2765]: I0123 00:08:52.469858 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-lib-modules\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470018 kubelet[2765]: I0123 00:08:52.469967 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-policysync\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470018 kubelet[2765]: I0123 00:08:52.469988 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-xtables-lock\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470221 kubelet[2765]: I0123 00:08:52.470164 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-cni-log-dir\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470221 kubelet[2765]: I0123 00:08:52.470198 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-flexvol-driver-host\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470364 kubelet[2765]: I0123 00:08:52.470348 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-cni-bin-dir\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470511 kubelet[2765]: I0123 00:08:52.470421 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-cni-net-dir\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.470511 kubelet[2765]: I0123 00:08:52.470444 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6d40903f-8e1c-47ce-9f76-8b8f9258adc5-var-lib-calico\") pod \"calico-node-zrq8d\" (UID: \"6d40903f-8e1c-47ce-9f76-8b8f9258adc5\") " pod="calico-system/calico-node-zrq8d" Jan 23 00:08:52.571179 kubelet[2765]: E0123 00:08:52.570887 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:08:52.575722 kubelet[2765]: E0123 00:08:52.575673 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.576306 kubelet[2765]: W0123 
00:08:52.576051 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.576306 kubelet[2765]: E0123 00:08:52.576095 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.576714 kubelet[2765]: E0123 00:08:52.576695 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.576968 kubelet[2765]: W0123 00:08:52.576769 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.576968 kubelet[2765]: E0123 00:08:52.576789 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.577422 kubelet[2765]: E0123 00:08:52.577400 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.577637 kubelet[2765]: W0123 00:08:52.577593 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.577637 kubelet[2765]: E0123 00:08:52.577616 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.579176 kubelet[2765]: E0123 00:08:52.578955 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.579176 kubelet[2765]: W0123 00:08:52.579113 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.579176 kubelet[2765]: E0123 00:08:52.579132 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.581184 kubelet[2765]: E0123 00:08:52.581151 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.581476 kubelet[2765]: W0123 00:08:52.581304 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.581476 kubelet[2765]: E0123 00:08:52.581329 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.583223 kubelet[2765]: E0123 00:08:52.582829 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.583223 kubelet[2765]: W0123 00:08:52.582847 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.583223 kubelet[2765]: E0123 00:08:52.582867 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.583223 kubelet[2765]: E0123 00:08:52.583215 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.583223 kubelet[2765]: W0123 00:08:52.583225 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.583339 kubelet[2765]: E0123 00:08:52.583237 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.583585 kubelet[2765]: E0123 00:08:52.583478 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.583585 kubelet[2765]: W0123 00:08:52.583580 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.583764 kubelet[2765]: E0123 00:08:52.583595 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.586564 kubelet[2765]: E0123 00:08:52.586306 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.586564 kubelet[2765]: W0123 00:08:52.586325 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.586564 kubelet[2765]: E0123 00:08:52.586342 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.587924 kubelet[2765]: E0123 00:08:52.587892 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.587924 kubelet[2765]: W0123 00:08:52.587915 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.588037 kubelet[2765]: E0123 00:08:52.587935 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.592072 kubelet[2765]: E0123 00:08:52.591906 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.592072 kubelet[2765]: W0123 00:08:52.591926 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.592072 kubelet[2765]: E0123 00:08:52.591944 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.592287 kubelet[2765]: E0123 00:08:52.592275 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.592351 kubelet[2765]: W0123 00:08:52.592340 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.592462 kubelet[2765]: E0123 00:08:52.592388 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.592781 kubelet[2765]: E0123 00:08:52.592768 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.593052 kubelet[2765]: W0123 00:08:52.592906 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.593052 kubelet[2765]: E0123 00:08:52.592925 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.593340 kubelet[2765]: E0123 00:08:52.593327 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.593412 kubelet[2765]: W0123 00:08:52.593400 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.593464 kubelet[2765]: E0123 00:08:52.593455 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.609491 kubelet[2765]: E0123 00:08:52.609111 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.609491 kubelet[2765]: W0123 00:08:52.609418 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.609491 kubelet[2765]: E0123 00:08:52.609440 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.638554 kubelet[2765]: E0123 00:08:52.638467 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.638890 kubelet[2765]: W0123 00:08:52.638724 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.638890 kubelet[2765]: E0123 00:08:52.638756 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.640035 kubelet[2765]: E0123 00:08:52.639790 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.640035 kubelet[2765]: W0123 00:08:52.639911 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.640035 kubelet[2765]: E0123 00:08:52.639966 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.641117 kubelet[2765]: E0123 00:08:52.641094 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.641262 kubelet[2765]: W0123 00:08:52.641188 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.641262 kubelet[2765]: E0123 00:08:52.641211 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.641581 kubelet[2765]: E0123 00:08:52.641487 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.641581 kubelet[2765]: W0123 00:08:52.641522 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.641581 kubelet[2765]: E0123 00:08:52.641534 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.642031 kubelet[2765]: E0123 00:08:52.641981 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.642613 kubelet[2765]: W0123 00:08:52.642543 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.642613 kubelet[2765]: E0123 00:08:52.642568 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.642934 kubelet[2765]: E0123 00:08:52.642892 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.642934 kubelet[2765]: W0123 00:08:52.642904 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.642934 kubelet[2765]: E0123 00:08:52.642916 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.643718 kubelet[2765]: E0123 00:08:52.643331 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.643718 kubelet[2765]: W0123 00:08:52.643344 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.643718 kubelet[2765]: E0123 00:08:52.643356 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.644079 kubelet[2765]: E0123 00:08:52.644012 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.644079 kubelet[2765]: W0123 00:08:52.644026 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.644079 kubelet[2765]: E0123 00:08:52.644038 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.644534 kubelet[2765]: E0123 00:08:52.644471 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.645786 kubelet[2765]: W0123 00:08:52.645751 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.645786 kubelet[2765]: E0123 00:08:52.645782 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.646582 kubelet[2765]: E0123 00:08:52.646285 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.646582 kubelet[2765]: W0123 00:08:52.646315 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.646582 kubelet[2765]: E0123 00:08:52.646422 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.647112 kubelet[2765]: E0123 00:08:52.646874 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.647112 kubelet[2765]: W0123 00:08:52.646892 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.647112 kubelet[2765]: E0123 00:08:52.646905 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.649022 kubelet[2765]: E0123 00:08:52.649003 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.649241 kubelet[2765]: W0123 00:08:52.649098 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.649241 kubelet[2765]: E0123 00:08:52.649120 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.649835 kubelet[2765]: E0123 00:08:52.649818 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.650097 kubelet[2765]: W0123 00:08:52.650083 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.650480 kubelet[2765]: E0123 00:08:52.650170 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.651109 kubelet[2765]: E0123 00:08:52.651038 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.651109 kubelet[2765]: W0123 00:08:52.651057 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.651109 kubelet[2765]: E0123 00:08:52.651069 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:52.651620 kubelet[2765]: E0123 00:08:52.651548 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.651620 kubelet[2765]: W0123 00:08:52.651564 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.651620 kubelet[2765]: E0123 00:08:52.651576 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:52.653365 kubelet[2765]: E0123 00:08:52.653301 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:52.653365 kubelet[2765]: W0123 00:08:52.653327 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:52.653365 kubelet[2765]: E0123 00:08:52.653340 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 23 00:08:52.653923 kubelet[2765]: E0123 00:08:52.653906 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:08:52.654705 kubelet[2765]: W0123 00:08:52.653991 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:08:52.654705 kubelet[2765]: E0123 00:08:52.654008 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:08:52.672784 kubelet[2765]: I0123 00:08:52.672618 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40189ccf-4a54-4a06-a382-10a9d6df2d28-registration-dir\") pod \"csi-node-driver-fd26x\" (UID: \"40189ccf-4a54-4a06-a382-10a9d6df2d28\") " pod="calico-system/csi-node-driver-fd26x"
Jan 23 00:08:52.673734 kubelet[2765]: I0123 00:08:52.673719 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40189ccf-4a54-4a06-a382-10a9d6df2d28-kubelet-dir\") pod \"csi-node-driver-fd26x\" (UID: \"40189ccf-4a54-4a06-a382-10a9d6df2d28\") " pod="calico-system/csi-node-driver-fd26x"
Jan 23 00:08:52.676879 kubelet[2765]: I0123 00:08:52.676866 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40189ccf-4a54-4a06-a382-10a9d6df2d28-socket-dir\") pod \"csi-node-driver-fd26x\" (UID: \"40189ccf-4a54-4a06-a382-10a9d6df2d28\") " pod="calico-system/csi-node-driver-fd26x"
Jan 23 00:08:52.678139 kubelet[2765]: I0123 00:08:52.678125 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40189ccf-4a54-4a06-a382-10a9d6df2d28-varrun\") pod \"csi-node-driver-fd26x\" (UID: \"40189ccf-4a54-4a06-a382-10a9d6df2d28\") " pod="calico-system/csi-node-driver-fd26x"
Jan 23 00:08:52.679289 kubelet[2765]: I0123 00:08:52.679236 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tp75\" (UniqueName: \"kubernetes.io/projected/40189ccf-4a54-4a06-a382-10a9d6df2d28-kube-api-access-2tp75\") pod \"csi-node-driver-fd26x\" (UID: \"40189ccf-4a54-4a06-a382-10a9d6df2d28\") " pod="calico-system/csi-node-driver-fd26x"
Jan 23 00:08:52.703890 containerd[1518]: time="2026-01-23T00:08:52.703821224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zrq8d,Uid:6d40903f-8e1c-47ce-9f76-8b8f9258adc5,Namespace:calico-system,Attempt:0,}"
Jan 23 00:08:52.730435 containerd[1518]: time="2026-01-23T00:08:52.730010332Z" level=info msg="connecting to shim 946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05" address="unix:///run/containerd/s/19f169cc70fd1d25a3c6cb66abf519f7165a03c10a1e1f0b5dfc19d6b3e5f155" namespace=k8s.io protocol=ttrpc version=3
Jan 23 00:08:52.754059 systemd[1]: Started cri-containerd-946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05.scope - libcontainer container 946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05.
Jan 23 00:08:52.798906 containerd[1518]: time="2026-01-23T00:08:52.798866375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zrq8d,Uid:6d40903f-8e1c-47ce-9f76-8b8f9258adc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\""
Jan 23 00:08:53.895985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558110832.mount: Deactivated successfully.
Jan 23 00:08:54.273991 kubelet[2765]: E0123 00:08:54.273863 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:08:54.433851 containerd[1518]: time="2026-01-23T00:08:54.433781208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:54.435449 containerd[1518]: time="2026-01-23T00:08:54.435393538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 00:08:54.435890 containerd[1518]: time="2026-01-23T00:08:54.435842096Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:54.441047 containerd[1518]: time="2026-01-23T00:08:54.440983854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:08:54.442051 containerd[1518]: time="2026-01-23T00:08:54.442016038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.998063305s"
Jan 23 00:08:54.442204 containerd[1518]: time="2026-01-23T00:08:54.442187102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 00:08:54.443709 containerd[1518]: time="2026-01-23T00:08:54.443639206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 00:08:54.469726 containerd[1518]: time="2026-01-23T00:08:54.469622775Z" level=info msg="CreateContainer within sandbox \"f773d549c37c55121c59bd997c54e2a82338f535d4c83a6965391235998e7408\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 00:08:54.479705 containerd[1518]: time="2026-01-23T00:08:54.479631958Z" level=info msg="Container 1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:08:54.483889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377770311.mount: Deactivated successfully.
Jan 23 00:08:54.492953 containerd[1518]: time="2026-01-23T00:08:54.492894837Z" level=info msg="CreateContainer within sandbox \"f773d549c37c55121c59bd997c54e2a82338f535d4c83a6965391235998e7408\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae\""
Jan 23 00:08:54.494052 containerd[1518]: time="2026-01-23T00:08:54.493974256Z" level=info msg="StartContainer for \"1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae\""
Jan 23 00:08:54.495669 containerd[1518]: time="2026-01-23T00:08:54.495606783Z" level=info msg="connecting to shim 1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae" address="unix:///run/containerd/s/a5d4cd10dae30bb869a1bd0a1143769bfb39dce69bf20d7c0f9951cf223128e0" protocol=ttrpc version=3
Jan 23 00:08:54.518952 systemd[1]: Started cri-containerd-1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae.scope - libcontainer container 1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae.
Jan 23 00:08:54.574398 containerd[1518]: time="2026-01-23T00:08:54.574201590Z" level=info msg="StartContainer for \"1481bbefce61a981c426f627dc80b72b536c145fa43ef5d81266a8df795b22ae\" returns successfully"
Jan 23 00:08:55.477365 kubelet[2765]: E0123 00:08:55.477272 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:08:55.477365 kubelet[2765]: W0123 00:08:55.477299 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:08:55.477365 kubelet[2765]: E0123 00:08:55.477320 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 00:08:55.478157 kubelet[2765]: E0123 00:08:55.478033 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 00:08:55.478157 kubelet[2765]: W0123 00:08:55.478053 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 00:08:55.478157 kubelet[2765]: E0123 00:08:55.478105 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 00:08:55.478515 kubelet[2765]: E0123 00:08:55.478499 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.478657 kubelet[2765]: W0123 00:08:55.478585 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.478657 kubelet[2765]: E0123 00:08:55.478608 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.479003 kubelet[2765]: E0123 00:08:55.478919 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.479003 kubelet[2765]: W0123 00:08:55.478931 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.479003 kubelet[2765]: E0123 00:08:55.478944 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.479452 kubelet[2765]: E0123 00:08:55.479339 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.479452 kubelet[2765]: W0123 00:08:55.479353 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.479452 kubelet[2765]: E0123 00:08:55.479367 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.479712 kubelet[2765]: E0123 00:08:55.479637 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.479712 kubelet[2765]: W0123 00:08:55.479649 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.479712 kubelet[2765]: E0123 00:08:55.479661 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.480086 kubelet[2765]: E0123 00:08:55.480014 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.480086 kubelet[2765]: W0123 00:08:55.480028 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.480086 kubelet[2765]: E0123 00:08:55.480041 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.480526 kubelet[2765]: E0123 00:08:55.480431 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.480526 kubelet[2765]: W0123 00:08:55.480443 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.480526 kubelet[2765]: E0123 00:08:55.480454 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.480823 kubelet[2765]: E0123 00:08:55.480765 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.480823 kubelet[2765]: W0123 00:08:55.480777 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.480823 kubelet[2765]: E0123 00:08:55.480788 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.481147 kubelet[2765]: E0123 00:08:55.481087 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.481147 kubelet[2765]: W0123 00:08:55.481101 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.481147 kubelet[2765]: E0123 00:08:55.481113 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.481473 kubelet[2765]: E0123 00:08:55.481411 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.481473 kubelet[2765]: W0123 00:08:55.481422 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.481473 kubelet[2765]: E0123 00:08:55.481435 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.481786 kubelet[2765]: E0123 00:08:55.481714 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.481786 kubelet[2765]: W0123 00:08:55.481725 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.481786 kubelet[2765]: E0123 00:08:55.481736 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.482122 kubelet[2765]: E0123 00:08:55.482049 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.482122 kubelet[2765]: W0123 00:08:55.482063 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.482122 kubelet[2765]: E0123 00:08:55.482075 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.482710 kubelet[2765]: E0123 00:08:55.482566 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.482710 kubelet[2765]: W0123 00:08:55.482583 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.482710 kubelet[2765]: E0123 00:08:55.482599 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.482967 kubelet[2765]: E0123 00:08:55.482884 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.482967 kubelet[2765]: W0123 00:08:55.482894 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.482967 kubelet[2765]: E0123 00:08:55.482906 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.506715 kubelet[2765]: E0123 00:08:55.506617 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.506715 kubelet[2765]: W0123 00:08:55.506653 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.506715 kubelet[2765]: E0123 00:08:55.506679 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.506995 kubelet[2765]: E0123 00:08:55.506978 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.507032 kubelet[2765]: W0123 00:08:55.506998 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.507032 kubelet[2765]: E0123 00:08:55.507015 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.507321 kubelet[2765]: E0123 00:08:55.507302 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.507321 kubelet[2765]: W0123 00:08:55.507318 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.507402 kubelet[2765]: E0123 00:08:55.507330 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.507553 kubelet[2765]: E0123 00:08:55.507540 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.507553 kubelet[2765]: W0123 00:08:55.507552 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.507624 kubelet[2765]: E0123 00:08:55.507562 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.507757 kubelet[2765]: E0123 00:08:55.507741 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.507757 kubelet[2765]: W0123 00:08:55.507752 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.507834 kubelet[2765]: E0123 00:08:55.507762 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.507946 kubelet[2765]: E0123 00:08:55.507935 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.507984 kubelet[2765]: W0123 00:08:55.507949 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.507984 kubelet[2765]: E0123 00:08:55.507959 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.508159 kubelet[2765]: E0123 00:08:55.508148 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.508159 kubelet[2765]: W0123 00:08:55.508159 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.508244 kubelet[2765]: E0123 00:08:55.508168 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.508643 kubelet[2765]: E0123 00:08:55.508627 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.508643 kubelet[2765]: W0123 00:08:55.508642 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.508778 kubelet[2765]: E0123 00:08:55.508655 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.508866 kubelet[2765]: E0123 00:08:55.508847 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.508866 kubelet[2765]: W0123 00:08:55.508861 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.508952 kubelet[2765]: E0123 00:08:55.508871 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.509058 kubelet[2765]: E0123 00:08:55.509021 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.509058 kubelet[2765]: W0123 00:08:55.509032 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.509058 kubelet[2765]: E0123 00:08:55.509041 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.509203 kubelet[2765]: E0123 00:08:55.509193 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.509289 kubelet[2765]: W0123 00:08:55.509203 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.509289 kubelet[2765]: E0123 00:08:55.509256 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.509453 kubelet[2765]: E0123 00:08:55.509441 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.509485 kubelet[2765]: W0123 00:08:55.509473 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.509511 kubelet[2765]: E0123 00:08:55.509486 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.509710 kubelet[2765]: E0123 00:08:55.509677 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.509710 kubelet[2765]: W0123 00:08:55.509702 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.509783 kubelet[2765]: E0123 00:08:55.509712 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.509932 kubelet[2765]: E0123 00:08:55.509919 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.509932 kubelet[2765]: W0123 00:08:55.509931 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.509987 kubelet[2765]: E0123 00:08:55.509942 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.510162 kubelet[2765]: E0123 00:08:55.510151 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.510197 kubelet[2765]: W0123 00:08:55.510162 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.510197 kubelet[2765]: E0123 00:08:55.510170 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.510353 kubelet[2765]: E0123 00:08:55.510339 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.510388 kubelet[2765]: W0123 00:08:55.510373 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.510388 kubelet[2765]: E0123 00:08:55.510385 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.510669 kubelet[2765]: E0123 00:08:55.510651 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.510933 kubelet[2765]: W0123 00:08:55.510734 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.510933 kubelet[2765]: E0123 00:08:55.510750 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:08:55.511052 kubelet[2765]: E0123 00:08:55.511038 2765 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:08:55.511119 kubelet[2765]: W0123 00:08:55.511107 2765 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:08:55.511195 kubelet[2765]: E0123 00:08:55.511166 2765 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:08:55.858524 containerd[1518]: time="2026-01-23T00:08:55.858452310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:08:55.860321 containerd[1518]: time="2026-01-23T00:08:55.860230871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 00:08:55.861034 containerd[1518]: time="2026-01-23T00:08:55.860967166Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:08:55.864054 containerd[1518]: time="2026-01-23T00:08:55.863956659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:08:55.865398 containerd[1518]: time="2026-01-23T00:08:55.864384901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.420700339s" Jan 23 00:08:55.865398 containerd[1518]: time="2026-01-23T00:08:55.864422218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 00:08:55.871580 containerd[1518]: time="2026-01-23T00:08:55.871537223Z" level=info msg="CreateContainer within sandbox \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 00:08:55.883578 containerd[1518]: time="2026-01-23T00:08:55.883421523Z" level=info msg="Container 265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:55.898116 containerd[1518]: time="2026-01-23T00:08:55.898062458Z" level=info msg="CreateContainer within sandbox \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73\"" Jan 23 00:08:55.899993 containerd[1518]: time="2026-01-23T00:08:55.899937650Z" level=info msg="StartContainer for \"265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73\"" Jan 23 00:08:55.904925 containerd[1518]: time="2026-01-23T00:08:55.904843213Z" level=info msg="connecting to shim 265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73" address="unix:///run/containerd/s/19f169cc70fd1d25a3c6cb66abf519f7165a03c10a1e1f0b5dfc19d6b3e5f155" protocol=ttrpc version=3 Jan 23 00:08:55.936952 systemd[1]: Started cri-containerd-265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73.scope - libcontainer container 
265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73. Jan 23 00:08:56.014993 containerd[1518]: time="2026-01-23T00:08:56.014924976Z" level=info msg="StartContainer for \"265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73\" returns successfully" Jan 23 00:08:56.036678 systemd[1]: cri-containerd-265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73.scope: Deactivated successfully. Jan 23 00:08:56.041582 containerd[1518]: time="2026-01-23T00:08:56.041524917Z" level=info msg="received container exit event container_id:\"265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73\" id:\"265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73\" pid:3449 exited_at:{seconds:1769126936 nanos:41017441}" Jan 23 00:08:56.066347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265b07fd23038c8b14da66e9e40254400af55fa96ce31707bfe7cc454da5dc73-rootfs.mount: Deactivated successfully. Jan 23 00:08:56.274676 kubelet[2765]: E0123 00:08:56.274423 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:08:56.464189 kubelet[2765]: I0123 00:08:56.462387 2765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 00:08:56.467221 containerd[1518]: time="2026-01-23T00:08:56.467095496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 00:08:56.500397 kubelet[2765]: I0123 00:08:56.500343 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-555c5ddc6f-nrw7f" podStartSLOduration=3.500116179 podStartE2EDuration="5.500313515s" podCreationTimestamp="2026-01-23 00:08:51 +0000 UTC" firstStartedPulling="2026-01-23 00:08:52.442941397 +0000 UTC m=+29.323319352" 
lastFinishedPulling="2026-01-23 00:08:54.443138733 +0000 UTC m=+31.323516688" observedRunningTime="2026-01-23 00:08:55.473076757 +0000 UTC m=+32.353454752" watchObservedRunningTime="2026-01-23 00:08:56.500313515 +0000 UTC m=+33.380691470" Jan 23 00:08:58.274453 kubelet[2765]: E0123 00:08:58.274291 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:08:59.153809 containerd[1518]: time="2026-01-23T00:08:59.153647603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:08:59.156481 containerd[1518]: time="2026-01-23T00:08:59.155009743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 00:08:59.156481 containerd[1518]: time="2026-01-23T00:08:59.155513986Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:08:59.157888 containerd[1518]: time="2026-01-23T00:08:59.157840217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:08:59.159182 containerd[1518]: time="2026-01-23T00:08:59.159128363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 
2.691969912s" Jan 23 00:08:59.159182 containerd[1518]: time="2026-01-23T00:08:59.159174959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 00:08:59.165377 containerd[1518]: time="2026-01-23T00:08:59.165318631Z" level=info msg="CreateContainer within sandbox \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 00:08:59.175708 containerd[1518]: time="2026-01-23T00:08:59.175635679Z" level=info msg="Container 3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:59.198576 containerd[1518]: time="2026-01-23T00:08:59.198499491Z" level=info msg="CreateContainer within sandbox \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396\"" Jan 23 00:08:59.199785 containerd[1518]: time="2026-01-23T00:08:59.199722602Z" level=info msg="StartContainer for \"3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396\"" Jan 23 00:08:59.202071 containerd[1518]: time="2026-01-23T00:08:59.202022674Z" level=info msg="connecting to shim 3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396" address="unix:///run/containerd/s/19f169cc70fd1d25a3c6cb66abf519f7165a03c10a1e1f0b5dfc19d6b3e5f155" protocol=ttrpc version=3 Jan 23 00:08:59.229136 systemd[1]: Started cri-containerd-3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396.scope - libcontainer container 3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396. 
Jan 23 00:08:59.356362 containerd[1518]: time="2026-01-23T00:08:59.356291860Z" level=info msg="StartContainer for \"3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396\" returns successfully" Jan 23 00:08:59.867038 containerd[1518]: time="2026-01-23T00:08:59.866992766Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:08:59.870777 systemd[1]: cri-containerd-3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396.scope: Deactivated successfully. Jan 23 00:08:59.871095 systemd[1]: cri-containerd-3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396.scope: Consumed 514ms CPU time, 188M memory peak, 165.9M written to disk. Jan 23 00:08:59.873381 containerd[1518]: time="2026-01-23T00:08:59.873117039Z" level=info msg="received container exit event container_id:\"3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396\" id:\"3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396\" pid:3509 exited_at:{seconds:1769126939 nanos:872592557}" Jan 23 00:08:59.898110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3566c1047ea68928160fd53ce6638ca7f3fb0f1df6938039777191116a0bc396-rootfs.mount: Deactivated successfully. Jan 23 00:08:59.930024 kubelet[2765]: I0123 00:08:59.929977 2765 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 00:09:00.058077 systemd[1]: Created slice kubepods-burstable-pod8a80960d_00a3_4ff3_b4b4_d91b05db2e77.slice - libcontainer container kubepods-burstable-pod8a80960d_00a3_4ff3_b4b4_d91b05db2e77.slice. Jan 23 00:09:00.083789 systemd[1]: Created slice kubepods-burstable-poda758f720_5521_43b4_9b45_c628b5675ddd.slice - libcontainer container kubepods-burstable-poda758f720_5521_43b4_9b45_c628b5675ddd.slice. 
Jan 23 00:09:00.108182 systemd[1]: Created slice kubepods-besteffort-pode04e4457_45b8_4cc6_bb9d_cbd9eec7c520.slice - libcontainer container kubepods-besteffort-pode04e4457_45b8_4cc6_bb9d_cbd9eec7c520.slice. Jan 23 00:09:00.124391 systemd[1]: Created slice kubepods-besteffort-pod535d2a42_9500_4db2_9b67_fdfed8e04eb2.slice - libcontainer container kubepods-besteffort-pod535d2a42_9500_4db2_9b67_fdfed8e04eb2.slice. Jan 23 00:09:00.137478 systemd[1]: Created slice kubepods-besteffort-pod04dc4181_cb72_4a34_bab6_405893931b00.slice - libcontainer container kubepods-besteffort-pod04dc4181_cb72_4a34_bab6_405893931b00.slice. Jan 23 00:09:00.146667 kubelet[2765]: I0123 00:09:00.146419 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fm8p\" (UniqueName: \"kubernetes.io/projected/04dc4181-cb72-4a34-bab6-405893931b00-kube-api-access-4fm8p\") pod \"calico-apiserver-ccfb6488b-prfmf\" (UID: \"04dc4181-cb72-4a34-bab6-405893931b00\") " pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" Jan 23 00:09:00.146999 kubelet[2765]: I0123 00:09:00.146874 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a80960d-00a3-4ff3-b4b4-d91b05db2e77-config-volume\") pod \"coredns-66bc5c9577-ckhsz\" (UID: \"8a80960d-00a3-4ff3-b4b4-d91b05db2e77\") " pod="kube-system/coredns-66bc5c9577-ckhsz" Jan 23 00:09:00.147033 kubelet[2765]: I0123 00:09:00.146910 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksqp\" (UniqueName: \"kubernetes.io/projected/a758f720-5521-43b4-9b45-c628b5675ddd-kube-api-access-vksqp\") pod \"coredns-66bc5c9577-jmrqc\" (UID: \"a758f720-5521-43b4-9b45-c628b5675ddd\") " pod="kube-system/coredns-66bc5c9577-jmrqc" Jan 23 00:09:00.147705 kubelet[2765]: I0123 00:09:00.147056 2765 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/535d2a42-9500-4db2-9b67-fdfed8e04eb2-tigera-ca-bundle\") pod \"calico-kube-controllers-5df54fc4cc-6ldlj\" (UID: \"535d2a42-9500-4db2-9b67-fdfed8e04eb2\") " pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" Jan 23 00:09:00.147705 kubelet[2765]: I0123 00:09:00.147282 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7xcr\" (UniqueName: \"kubernetes.io/projected/de599f6a-f928-4066-9266-9bc7100de609-kube-api-access-j7xcr\") pod \"whisker-79b9d654f9-f5q7q\" (UID: \"de599f6a-f928-4066-9266-9bc7100de609\") " pod="calico-system/whisker-79b9d654f9-f5q7q" Jan 23 00:09:00.147705 kubelet[2765]: I0123 00:09:00.147301 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f88ch\" (UniqueName: \"kubernetes.io/projected/8a80960d-00a3-4ff3-b4b4-d91b05db2e77-kube-api-access-f88ch\") pod \"coredns-66bc5c9577-ckhsz\" (UID: \"8a80960d-00a3-4ff3-b4b4-d91b05db2e77\") " pod="kube-system/coredns-66bc5c9577-ckhsz" Jan 23 00:09:00.147705 kubelet[2765]: I0123 00:09:00.147318 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4412ff60-1893-4e70-a14c-c509d31ae479-calico-apiserver-certs\") pod \"calico-apiserver-ccfb6488b-mxfqt\" (UID: \"4412ff60-1893-4e70-a14c-c509d31ae479\") " pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" Jan 23 00:09:00.148860 kubelet[2765]: I0123 00:09:00.148803 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e04e4457-45b8-4cc6-bb9d-cbd9eec7c520-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-vc8hr\" (UID: \"e04e4457-45b8-4cc6-bb9d-cbd9eec7c520\") " 
pod="calico-system/goldmane-7c778bb748-vc8hr" Jan 23 00:09:00.149226 kubelet[2765]: I0123 00:09:00.149152 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e04e4457-45b8-4cc6-bb9d-cbd9eec7c520-goldmane-key-pair\") pod \"goldmane-7c778bb748-vc8hr\" (UID: \"e04e4457-45b8-4cc6-bb9d-cbd9eec7c520\") " pod="calico-system/goldmane-7c778bb748-vc8hr" Jan 23 00:09:00.149297 kubelet[2765]: I0123 00:09:00.149284 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/04dc4181-cb72-4a34-bab6-405893931b00-calico-apiserver-certs\") pod \"calico-apiserver-ccfb6488b-prfmf\" (UID: \"04dc4181-cb72-4a34-bab6-405893931b00\") " pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" Jan 23 00:09:00.149438 kubelet[2765]: I0123 00:09:00.149394 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rqhk\" (UniqueName: \"kubernetes.io/projected/4412ff60-1893-4e70-a14c-c509d31ae479-kube-api-access-5rqhk\") pod \"calico-apiserver-ccfb6488b-mxfqt\" (UID: \"4412ff60-1893-4e70-a14c-c509d31ae479\") " pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" Jan 23 00:09:00.149671 kubelet[2765]: I0123 00:09:00.149423 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a758f720-5521-43b4-9b45-c628b5675ddd-config-volume\") pod \"coredns-66bc5c9577-jmrqc\" (UID: \"a758f720-5521-43b4-9b45-c628b5675ddd\") " pod="kube-system/coredns-66bc5c9577-jmrqc" Jan 23 00:09:00.149671 kubelet[2765]: I0123 00:09:00.149651 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e04e4457-45b8-4cc6-bb9d-cbd9eec7c520-config\") pod 
\"goldmane-7c778bb748-vc8hr\" (UID: \"e04e4457-45b8-4cc6-bb9d-cbd9eec7c520\") " pod="calico-system/goldmane-7c778bb748-vc8hr" Jan 23 00:09:00.150014 systemd[1]: Created slice kubepods-besteffort-pod4412ff60_1893_4e70_a14c_c509d31ae479.slice - libcontainer container kubepods-besteffort-pod4412ff60_1893_4e70_a14c_c509d31ae479.slice. Jan 23 00:09:00.150249 kubelet[2765]: I0123 00:09:00.150076 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdnx\" (UniqueName: \"kubernetes.io/projected/e04e4457-45b8-4cc6-bb9d-cbd9eec7c520-kube-api-access-shdnx\") pod \"goldmane-7c778bb748-vc8hr\" (UID: \"e04e4457-45b8-4cc6-bb9d-cbd9eec7c520\") " pod="calico-system/goldmane-7c778bb748-vc8hr" Jan 23 00:09:00.150378 kubelet[2765]: I0123 00:09:00.150346 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjpbx\" (UniqueName: \"kubernetes.io/projected/535d2a42-9500-4db2-9b67-fdfed8e04eb2-kube-api-access-pjpbx\") pod \"calico-kube-controllers-5df54fc4cc-6ldlj\" (UID: \"535d2a42-9500-4db2-9b67-fdfed8e04eb2\") " pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" Jan 23 00:09:00.151797 kubelet[2765]: I0123 00:09:00.151767 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de599f6a-f928-4066-9266-9bc7100de609-whisker-backend-key-pair\") pod \"whisker-79b9d654f9-f5q7q\" (UID: \"de599f6a-f928-4066-9266-9bc7100de609\") " pod="calico-system/whisker-79b9d654f9-f5q7q" Jan 23 00:09:00.152002 kubelet[2765]: I0123 00:09:00.151972 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de599f6a-f928-4066-9266-9bc7100de609-whisker-ca-bundle\") pod \"whisker-79b9d654f9-f5q7q\" (UID: \"de599f6a-f928-4066-9266-9bc7100de609\") " 
pod="calico-system/whisker-79b9d654f9-f5q7q" Jan 23 00:09:00.161034 systemd[1]: Created slice kubepods-besteffort-podde599f6a_f928_4066_9266_9bc7100de609.slice - libcontainer container kubepods-besteffort-podde599f6a_f928_4066_9266_9bc7100de609.slice. Jan 23 00:09:00.310578 systemd[1]: Created slice kubepods-besteffort-pod40189ccf_4a54_4a06_a382_10a9d6df2d28.slice - libcontainer container kubepods-besteffort-pod40189ccf_4a54_4a06_a382_10a9d6df2d28.slice. Jan 23 00:09:00.316615 containerd[1518]: time="2026-01-23T00:09:00.316572433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd26x,Uid:40189ccf-4a54-4a06-a382-10a9d6df2d28,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:00.381890 containerd[1518]: time="2026-01-23T00:09:00.381396627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ckhsz,Uid:8a80960d-00a3-4ff3-b4b4-d91b05db2e77,Namespace:kube-system,Attempt:0,}" Jan 23 00:09:00.402167 containerd[1518]: time="2026-01-23T00:09:00.402090675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jmrqc,Uid:a758f720-5521-43b4-9b45-c628b5675ddd,Namespace:kube-system,Attempt:0,}" Jan 23 00:09:00.419542 containerd[1518]: time="2026-01-23T00:09:00.419497830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vc8hr,Uid:e04e4457-45b8-4cc6-bb9d-cbd9eec7c520,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:00.440132 containerd[1518]: time="2026-01-23T00:09:00.439969214Z" level=error msg="Failed to destroy network for sandbox \"5a439a89b8a9b60dcfebd69cb28258c112fce3a0054f1a36bdbac5de3af00ee8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.447623 containerd[1518]: time="2026-01-23T00:09:00.447539770Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5df54fc4cc-6ldlj,Uid:535d2a42-9500-4db2-9b67-fdfed8e04eb2,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:00.448240 containerd[1518]: time="2026-01-23T00:09:00.447989459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd26x,Uid:40189ccf-4a54-4a06-a382-10a9d6df2d28,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a439a89b8a9b60dcfebd69cb28258c112fce3a0054f1a36bdbac5de3af00ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.450209 kubelet[2765]: E0123 00:09:00.450146 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a439a89b8a9b60dcfebd69cb28258c112fce3a0054f1a36bdbac5de3af00ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.451068 kubelet[2765]: E0123 00:09:00.450474 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a439a89b8a9b60dcfebd69cb28258c112fce3a0054f1a36bdbac5de3af00ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fd26x" Jan 23 00:09:00.451068 kubelet[2765]: E0123 00:09:00.450504 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a439a89b8a9b60dcfebd69cb28258c112fce3a0054f1a36bdbac5de3af00ee8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fd26x" Jan 23 00:09:00.451068 kubelet[2765]: E0123 00:09:00.450580 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a439a89b8a9b60dcfebd69cb28258c112fce3a0054f1a36bdbac5de3af00ee8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:09:00.451965 containerd[1518]: time="2026-01-23T00:09:00.451757478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-prfmf,Uid:04dc4181-cb72-4a34-bab6-405893931b00,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:09:00.461595 containerd[1518]: time="2026-01-23T00:09:00.461142509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-mxfqt,Uid:4412ff60-1893-4e70-a14c-c509d31ae479,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:09:00.470090 containerd[1518]: time="2026-01-23T00:09:00.469615722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b9d654f9-f5q7q,Uid:de599f6a-f928-4066-9266-9bc7100de609,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:00.500177 containerd[1518]: time="2026-01-23T00:09:00.499889107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 00:09:00.573422 containerd[1518]: time="2026-01-23T00:09:00.573342944Z" level=error msg="Failed to destroy network for sandbox 
\"83a0bbe6b97c41a7887b9d7084cfbd08d5e4b5ab86716f4c7f1b24f76db0fef5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.585562 containerd[1518]: time="2026-01-23T00:09:00.585496223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vc8hr,Uid:e04e4457-45b8-4cc6-bb9d-cbd9eec7c520,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83a0bbe6b97c41a7887b9d7084cfbd08d5e4b5ab86716f4c7f1b24f76db0fef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.586117 kubelet[2765]: E0123 00:09:00.586083 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83a0bbe6b97c41a7887b9d7084cfbd08d5e4b5ab86716f4c7f1b24f76db0fef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.586274 kubelet[2765]: E0123 00:09:00.586257 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83a0bbe6b97c41a7887b9d7084cfbd08d5e4b5ab86716f4c7f1b24f76db0fef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-vc8hr" Jan 23 00:09:00.586365 kubelet[2765]: E0123 00:09:00.586350 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"83a0bbe6b97c41a7887b9d7084cfbd08d5e4b5ab86716f4c7f1b24f76db0fef5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-vc8hr" Jan 23 00:09:00.586572 kubelet[2765]: E0123 00:09:00.586460 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83a0bbe6b97c41a7887b9d7084cfbd08d5e4b5ab86716f4c7f1b24f76db0fef5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:00.616063 containerd[1518]: time="2026-01-23T00:09:00.616009071Z" level=error msg="Failed to destroy network for sandbox \"11f05a5295cd749da4d23f0de974720db834c6a789b6d763156bab9722c3ea17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.622013 containerd[1518]: time="2026-01-23T00:09:00.621952940Z" level=error msg="Failed to destroy network for sandbox \"2a1774344baeb0014176e30445a199d7fbc3cfa96a70c225ecc964ae9676a8e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.633442 containerd[1518]: time="2026-01-23T00:09:00.632655319Z" level=error msg="Failed to destroy 
network for sandbox \"7c530b253b1b24af25f12108b939e565b2f005da46fe421a98361aefaa93585a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.636557 containerd[1518]: time="2026-01-23T00:09:00.636411460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-mxfqt,Uid:4412ff60-1893-4e70-a14c-c509d31ae479,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f05a5295cd749da4d23f0de974720db834c6a789b6d763156bab9722c3ea17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.637745 kubelet[2765]: E0123 00:09:00.637335 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f05a5295cd749da4d23f0de974720db834c6a789b6d763156bab9722c3ea17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.637745 kubelet[2765]: E0123 00:09:00.637392 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11f05a5295cd749da4d23f0de974720db834c6a789b6d763156bab9722c3ea17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" Jan 23 00:09:00.637745 kubelet[2765]: E0123 00:09:00.637412 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"11f05a5295cd749da4d23f0de974720db834c6a789b6d763156bab9722c3ea17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" Jan 23 00:09:00.637935 kubelet[2765]: E0123 00:09:00.637463 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11f05a5295cd749da4d23f0de974720db834c6a789b6d763156bab9722c3ea17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:00.639623 containerd[1518]: time="2026-01-23T00:09:00.639520884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ckhsz,Uid:8a80960d-00a3-4ff3-b4b4-d91b05db2e77,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1774344baeb0014176e30445a199d7fbc3cfa96a70c225ecc964ae9676a8e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.640122 kubelet[2765]: E0123 00:09:00.640011 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1774344baeb0014176e30445a199d7fbc3cfa96a70c225ecc964ae9676a8e0\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.640122 kubelet[2765]: E0123 00:09:00.640092 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1774344baeb0014176e30445a199d7fbc3cfa96a70c225ecc964ae9676a8e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ckhsz" Jan 23 00:09:00.640456 kubelet[2765]: E0123 00:09:00.640251 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a1774344baeb0014176e30445a199d7fbc3cfa96a70c225ecc964ae9676a8e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ckhsz" Jan 23 00:09:00.640456 kubelet[2765]: E0123 00:09:00.640422 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ckhsz_kube-system(8a80960d-00a3-4ff3-b4b4-d91b05db2e77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ckhsz_kube-system(8a80960d-00a3-4ff3-b4b4-d91b05db2e77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a1774344baeb0014176e30445a199d7fbc3cfa96a70c225ecc964ae9676a8e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ckhsz" podUID="8a80960d-00a3-4ff3-b4b4-d91b05db2e77" Jan 23 00:09:00.645765 containerd[1518]: 
time="2026-01-23T00:09:00.645571986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jmrqc,Uid:a758f720-5521-43b4-9b45-c628b5675ddd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c530b253b1b24af25f12108b939e565b2f005da46fe421a98361aefaa93585a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.646322 kubelet[2765]: E0123 00:09:00.646289 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c530b253b1b24af25f12108b939e565b2f005da46fe421a98361aefaa93585a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.646563 kubelet[2765]: E0123 00:09:00.646433 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c530b253b1b24af25f12108b939e565b2f005da46fe421a98361aefaa93585a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jmrqc" Jan 23 00:09:00.646563 kubelet[2765]: E0123 00:09:00.646461 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c530b253b1b24af25f12108b939e565b2f005da46fe421a98361aefaa93585a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jmrqc" Jan 23 00:09:00.646563 kubelet[2765]: 
E0123 00:09:00.646512 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jmrqc_kube-system(a758f720-5521-43b4-9b45-c628b5675ddd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jmrqc_kube-system(a758f720-5521-43b4-9b45-c628b5675ddd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c530b253b1b24af25f12108b939e565b2f005da46fe421a98361aefaa93585a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jmrqc" podUID="a758f720-5521-43b4-9b45-c628b5675ddd" Jan 23 00:09:00.667883 containerd[1518]: time="2026-01-23T00:09:00.667833005Z" level=error msg="Failed to destroy network for sandbox \"87815d89b5e4295ce7daf6006429b7cac288a6ff3bc47547fc8bb6cbd0121f0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.674724 containerd[1518]: time="2026-01-23T00:09:00.674566619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df54fc4cc-6ldlj,Uid:535d2a42-9500-4db2-9b67-fdfed8e04eb2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87815d89b5e4295ce7daf6006429b7cac288a6ff3bc47547fc8bb6cbd0121f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.675681 kubelet[2765]: E0123 00:09:00.675141 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"87815d89b5e4295ce7daf6006429b7cac288a6ff3bc47547fc8bb6cbd0121f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.675681 kubelet[2765]: E0123 00:09:00.675213 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87815d89b5e4295ce7daf6006429b7cac288a6ff3bc47547fc8bb6cbd0121f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" Jan 23 00:09:00.675681 kubelet[2765]: E0123 00:09:00.675233 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87815d89b5e4295ce7daf6006429b7cac288a6ff3bc47547fc8bb6cbd0121f0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" Jan 23 00:09:00.676961 kubelet[2765]: E0123 00:09:00.675306 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87815d89b5e4295ce7daf6006429b7cac288a6ff3bc47547fc8bb6cbd0121f0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:00.684555 containerd[1518]: time="2026-01-23T00:09:00.684497852Z" level=error msg="Failed to destroy network for sandbox \"892701df65d047ea4aa4a52d8687ef146acc2b742b45d41d7f4b8c3721442897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.688364 containerd[1518]: time="2026-01-23T00:09:00.688292349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-prfmf,Uid:04dc4181-cb72-4a34-bab6-405893931b00,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"892701df65d047ea4aa4a52d8687ef146acc2b742b45d41d7f4b8c3721442897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.689355 kubelet[2765]: E0123 00:09:00.688710 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892701df65d047ea4aa4a52d8687ef146acc2b742b45d41d7f4b8c3721442897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.689355 kubelet[2765]: E0123 00:09:00.688771 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892701df65d047ea4aa4a52d8687ef146acc2b742b45d41d7f4b8c3721442897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" Jan 23 00:09:00.689355 kubelet[2765]: E0123 00:09:00.688813 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"892701df65d047ea4aa4a52d8687ef146acc2b742b45d41d7f4b8c3721442897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" Jan 23 00:09:00.689545 kubelet[2765]: E0123 00:09:00.688876 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"892701df65d047ea4aa4a52d8687ef146acc2b742b45d41d7f4b8c3721442897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:00.692287 containerd[1518]: time="2026-01-23T00:09:00.692242116Z" level=error msg="Failed to destroy network for sandbox \"705e4117368f73b0b5324d2686fc3c0c1f46f3e289a8df3628fa6e358a54ab61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.694822 containerd[1518]: time="2026-01-23T00:09:00.694749942Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-79b9d654f9-f5q7q,Uid:de599f6a-f928-4066-9266-9bc7100de609,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"705e4117368f73b0b5324d2686fc3c0c1f46f3e289a8df3628fa6e358a54ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.695591 kubelet[2765]: E0123 00:09:00.695223 2765 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705e4117368f73b0b5324d2686fc3c0c1f46f3e289a8df3628fa6e358a54ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:09:00.695591 kubelet[2765]: E0123 00:09:00.695280 2765 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705e4117368f73b0b5324d2686fc3c0c1f46f3e289a8df3628fa6e358a54ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b9d654f9-f5q7q" Jan 23 00:09:00.695591 kubelet[2765]: E0123 00:09:00.695299 2765 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"705e4117368f73b0b5324d2686fc3c0c1f46f3e289a8df3628fa6e358a54ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79b9d654f9-f5q7q" Jan 23 00:09:00.695930 kubelet[2765]: E0123 00:09:00.695347 2765 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79b9d654f9-f5q7q_calico-system(de599f6a-f928-4066-9266-9bc7100de609)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79b9d654f9-f5q7q_calico-system(de599f6a-f928-4066-9266-9bc7100de609)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"705e4117368f73b0b5324d2686fc3c0c1f46f3e289a8df3628fa6e358a54ab61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79b9d654f9-f5q7q" podUID="de599f6a-f928-4066-9266-9bc7100de609" Jan 23 00:09:05.022480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260560977.mount: Deactivated successfully. Jan 23 00:09:05.063750 containerd[1518]: time="2026-01-23T00:09:05.062261665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:09:05.064986 containerd[1518]: time="2026-01-23T00:09:05.064940885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 00:09:05.066709 containerd[1518]: time="2026-01-23T00:09:05.066642357Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:09:05.072046 containerd[1518]: time="2026-01-23T00:09:05.071842485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.571304143s" Jan 23 00:09:05.073430 containerd[1518]: 
time="2026-01-23T00:09:05.073387885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 00:09:05.087205 containerd[1518]: time="2026-01-23T00:09:05.086121901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:09:05.107651 containerd[1518]: time="2026-01-23T00:09:05.107577142Z" level=info msg="CreateContainer within sandbox \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 00:09:05.170900 containerd[1518]: time="2026-01-23T00:09:05.170848081Z" level=info msg="Container 8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:09:05.200129 containerd[1518]: time="2026-01-23T00:09:05.200071557Z" level=info msg="CreateContainer within sandbox \"946b9f4be4e10df3feb5aab18dae3d2e7a08e68e7b2e5322c14042e156912e05\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6\"" Jan 23 00:09:05.204615 containerd[1518]: time="2026-01-23T00:09:05.204484327Z" level=info msg="StartContainer for \"8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6\"" Jan 23 00:09:05.207030 containerd[1518]: time="2026-01-23T00:09:05.206977077Z" level=info msg="connecting to shim 8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6" address="unix:///run/containerd/s/19f169cc70fd1d25a3c6cb66abf519f7165a03c10a1e1f0b5dfc19d6b3e5f155" protocol=ttrpc version=3 Jan 23 00:09:05.266967 systemd[1]: Started cri-containerd-8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6.scope - libcontainer container 
8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6. Jan 23 00:09:05.358404 containerd[1518]: time="2026-01-23T00:09:05.358364181Z" level=info msg="StartContainer for \"8f9cd441b271d68ee244acd2eedd821a2d93e13d945a44f4f042005552df9fd6\" returns successfully" Jan 23 00:09:05.524509 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 00:09:05.524636 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 23 00:09:05.559217 kubelet[2765]: I0123 00:09:05.558951 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zrq8d" podStartSLOduration=1.285373575 podStartE2EDuration="13.558930119s" podCreationTimestamp="2026-01-23 00:08:52 +0000 UTC" firstStartedPulling="2026-01-23 00:08:52.800921924 +0000 UTC m=+29.681299879" lastFinishedPulling="2026-01-23 00:09:05.074478508 +0000 UTC m=+41.954856423" observedRunningTime="2026-01-23 00:09:05.557403439 +0000 UTC m=+42.437781394" watchObservedRunningTime="2026-01-23 00:09:05.558930119 +0000 UTC m=+42.439308114" Jan 23 00:09:05.796658 kubelet[2765]: I0123 00:09:05.796493 2765 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de599f6a-f928-4066-9266-9bc7100de609-whisker-ca-bundle\") pod \"de599f6a-f928-4066-9266-9bc7100de609\" (UID: \"de599f6a-f928-4066-9266-9bc7100de609\") " Jan 23 00:09:05.797366 kubelet[2765]: I0123 00:09:05.797337 2765 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7xcr\" (UniqueName: \"kubernetes.io/projected/de599f6a-f928-4066-9266-9bc7100de609-kube-api-access-j7xcr\") pod \"de599f6a-f928-4066-9266-9bc7100de609\" (UID: \"de599f6a-f928-4066-9266-9bc7100de609\") " Jan 23 00:09:05.797841 kubelet[2765]: I0123 00:09:05.797818 2765 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/de599f6a-f928-4066-9266-9bc7100de609-whisker-backend-key-pair\") pod \"de599f6a-f928-4066-9266-9bc7100de609\" (UID: \"de599f6a-f928-4066-9266-9bc7100de609\") " Jan 23 00:09:05.798734 kubelet[2765]: I0123 00:09:05.797616 2765 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de599f6a-f928-4066-9266-9bc7100de609-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "de599f6a-f928-4066-9266-9bc7100de609" (UID: "de599f6a-f928-4066-9266-9bc7100de609"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:09:05.801920 kubelet[2765]: I0123 00:09:05.801862 2765 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de599f6a-f928-4066-9266-9bc7100de609-kube-api-access-j7xcr" (OuterVolumeSpecName: "kube-api-access-j7xcr") pod "de599f6a-f928-4066-9266-9bc7100de609" (UID: "de599f6a-f928-4066-9266-9bc7100de609"). InnerVolumeSpecName "kube-api-access-j7xcr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:09:05.803171 kubelet[2765]: I0123 00:09:05.803068 2765 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de599f6a-f928-4066-9266-9bc7100de609-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "de599f6a-f928-4066-9266-9bc7100de609" (UID: "de599f6a-f928-4066-9266-9bc7100de609"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 00:09:05.898275 kubelet[2765]: I0123 00:09:05.898203 2765 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de599f6a-f928-4066-9266-9bc7100de609-whisker-ca-bundle\") on node \"ci-4459-2-2-n-8734b5e787\" DevicePath \"\"" Jan 23 00:09:05.898275 kubelet[2765]: I0123 00:09:05.898241 2765 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j7xcr\" (UniqueName: \"kubernetes.io/projected/de599f6a-f928-4066-9266-9bc7100de609-kube-api-access-j7xcr\") on node \"ci-4459-2-2-n-8734b5e787\" DevicePath \"\"" Jan 23 00:09:05.898275 kubelet[2765]: I0123 00:09:05.898251 2765 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/de599f6a-f928-4066-9266-9bc7100de609-whisker-backend-key-pair\") on node \"ci-4459-2-2-n-8734b5e787\" DevicePath \"\"" Jan 23 00:09:06.023709 systemd[1]: var-lib-kubelet-pods-de599f6a\x2df928\x2d4066\x2d9266\x2d9bc7100de609-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj7xcr.mount: Deactivated successfully. Jan 23 00:09:06.023821 systemd[1]: var-lib-kubelet-pods-de599f6a\x2df928\x2d4066\x2d9266\x2d9bc7100de609-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 00:09:06.518793 kubelet[2765]: I0123 00:09:06.518110 2765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 00:09:06.524363 systemd[1]: Removed slice kubepods-besteffort-podde599f6a_f928_4066_9266_9bc7100de609.slice - libcontainer container kubepods-besteffort-podde599f6a_f928_4066_9266_9bc7100de609.slice. Jan 23 00:09:06.606083 systemd[1]: Created slice kubepods-besteffort-podc35d1acc_4390_4333_8520_ad8b62c4e6ab.slice - libcontainer container kubepods-besteffort-podc35d1acc_4390_4333_8520_ad8b62c4e6ab.slice. 
Jan 23 00:09:06.703524 kubelet[2765]: I0123 00:09:06.703464 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c35d1acc-4390-4333-8520-ad8b62c4e6ab-whisker-backend-key-pair\") pod \"whisker-5c6fbdd5b7-6st7k\" (UID: \"c35d1acc-4390-4333-8520-ad8b62c4e6ab\") " pod="calico-system/whisker-5c6fbdd5b7-6st7k" Jan 23 00:09:06.704349 kubelet[2765]: I0123 00:09:06.703862 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c35d1acc-4390-4333-8520-ad8b62c4e6ab-whisker-ca-bundle\") pod \"whisker-5c6fbdd5b7-6st7k\" (UID: \"c35d1acc-4390-4333-8520-ad8b62c4e6ab\") " pod="calico-system/whisker-5c6fbdd5b7-6st7k" Jan 23 00:09:06.704349 kubelet[2765]: I0123 00:09:06.703940 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85ms\" (UniqueName: \"kubernetes.io/projected/c35d1acc-4390-4333-8520-ad8b62c4e6ab-kube-api-access-h85ms\") pod \"whisker-5c6fbdd5b7-6st7k\" (UID: \"c35d1acc-4390-4333-8520-ad8b62c4e6ab\") " pod="calico-system/whisker-5c6fbdd5b7-6st7k" Jan 23 00:09:06.912767 containerd[1518]: time="2026-01-23T00:09:06.912717648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6fbdd5b7-6st7k,Uid:c35d1acc-4390-4333-8520-ad8b62c4e6ab,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:07.126759 systemd-networkd[1422]: calibc9dfcbf3b2: Link UP Jan 23 00:09:07.127559 systemd-networkd[1422]: calibc9dfcbf3b2: Gained carrier Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:06.943 [INFO][3926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.004 [INFO][3926] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0 whisker-5c6fbdd5b7- calico-system c35d1acc-4390-4333-8520-ad8b62c4e6ab 882 0 2026-01-23 00:09:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c6fbdd5b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 whisker-5c6fbdd5b7-6st7k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc9dfcbf3b2 [] [] }} ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.004 [INFO][3926] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.055 [INFO][3937] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" HandleID="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Workload="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.055 [INFO][3937] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" HandleID="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Workload="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0d0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-8734b5e787", "pod":"whisker-5c6fbdd5b7-6st7k", "timestamp":"2026-01-23 00:09:07.055449329 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.055 [INFO][3937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.055 [INFO][3937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.055 [INFO][3937] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.073 [INFO][3937] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.081 [INFO][3937] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.087 [INFO][3937] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.090 [INFO][3937] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.093 [INFO][3937] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.093 [INFO][3937] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 
handle="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.096 [INFO][3937] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8 Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.105 [INFO][3937] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.113 [INFO][3937] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.65/26] block=192.168.3.64/26 handle="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.113 [INFO][3937] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.65/26] handle="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.113 [INFO][3937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:09:07.149077 containerd[1518]: 2026-01-23 00:09:07.113 [INFO][3937] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.65/26] IPv6=[] ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" HandleID="k8s-pod-network.4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Workload="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.149951 containerd[1518]: 2026-01-23 00:09:07.116 [INFO][3926] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0", GenerateName:"whisker-5c6fbdd5b7-", Namespace:"calico-system", SelfLink:"", UID:"c35d1acc-4390-4333-8520-ad8b62c4e6ab", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6fbdd5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"whisker-5c6fbdd5b7-6st7k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calibc9dfcbf3b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:07.149951 containerd[1518]: 2026-01-23 00:09:07.116 [INFO][3926] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.65/32] ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.149951 containerd[1518]: 2026-01-23 00:09:07.116 [INFO][3926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc9dfcbf3b2 ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.149951 containerd[1518]: 2026-01-23 00:09:07.127 [INFO][3926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.149951 containerd[1518]: 2026-01-23 00:09:07.129 [INFO][3926] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0", GenerateName:"whisker-5c6fbdd5b7-", Namespace:"calico-system", SelfLink:"", 
UID:"c35d1acc-4390-4333-8520-ad8b62c4e6ab", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 9, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6fbdd5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8", Pod:"whisker-5c6fbdd5b7-6st7k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.3.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc9dfcbf3b2", MAC:"46:78:9f:27:bf:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:07.149951 containerd[1518]: 2026-01-23 00:09:07.144 [INFO][3926] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" Namespace="calico-system" Pod="whisker-5c6fbdd5b7-6st7k" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-whisker--5c6fbdd5b7--6st7k-eth0" Jan 23 00:09:07.203992 containerd[1518]: time="2026-01-23T00:09:07.203869612Z" level=info msg="connecting to shim 4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8" address="unix:///run/containerd/s/9ed086421206b9c7a8c6a3dabdd5b5a5cf34c42fce64c7a6ceacf92d0bfb8727" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:07.240105 systemd[1]: Started 
cri-containerd-4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8.scope - libcontainer container 4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8. Jan 23 00:09:07.277595 kubelet[2765]: I0123 00:09:07.277547 2765 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de599f6a-f928-4066-9266-9bc7100de609" path="/var/lib/kubelet/pods/de599f6a-f928-4066-9266-9bc7100de609/volumes" Jan 23 00:09:07.301657 containerd[1518]: time="2026-01-23T00:09:07.301535113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6fbdd5b7-6st7k,Uid:c35d1acc-4390-4333-8520-ad8b62c4e6ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e9efdca57e11c943e6653ef7fea2502a389aa1825245a5bf5b3f2d2ce3c92c8\"" Jan 23 00:09:07.306825 containerd[1518]: time="2026-01-23T00:09:07.306583561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:09:07.665546 containerd[1518]: time="2026-01-23T00:09:07.665304237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:07.667035 containerd[1518]: time="2026-01-23T00:09:07.666817327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:09:07.667035 containerd[1518]: time="2026-01-23T00:09:07.666969400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:09:07.667273 kubelet[2765]: E0123 00:09:07.667176 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:09:07.667273 kubelet[2765]: E0123 00:09:07.667224 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:09:07.669560 kubelet[2765]: E0123 00:09:07.669499 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:07.670939 containerd[1518]: time="2026-01-23T00:09:07.670895459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:09:08.005886 containerd[1518]: time="2026-01-23T00:09:08.005572857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:08.007740 containerd[1518]: time="2026-01-23T00:09:08.007585290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:09:08.007740 containerd[1518]: time="2026-01-23T00:09:08.007612209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:09:08.008287 kubelet[2765]: E0123 00:09:08.007995 2765 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:09:08.008287 kubelet[2765]: E0123 00:09:08.008049 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:09:08.008287 kubelet[2765]: E0123 00:09:08.008137 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:08.010242 kubelet[2765]: E0123 00:09:08.010171 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:09:08.531792 kubelet[2765]: E0123 00:09:08.531621 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:09:08.815816 systemd-networkd[1422]: calibc9dfcbf3b2: Gained IPv6LL Jan 23 00:09:09.695390 kubelet[2765]: I0123 00:09:09.694911 2765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 00:09:10.940230 systemd-networkd[1422]: vxlan.calico: Link UP Jan 23 00:09:10.940240 systemd-networkd[1422]: vxlan.calico: Gained carrier Jan 23 00:09:11.284544 containerd[1518]: time="2026-01-23T00:09:11.284298148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-mxfqt,Uid:4412ff60-1893-4e70-a14c-c509d31ae479,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:09:11.291830 containerd[1518]: time="2026-01-23T00:09:11.289103380Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5df54fc4cc-6ldlj,Uid:535d2a42-9500-4db2-9b67-fdfed8e04eb2,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:11.470106 systemd-networkd[1422]: calieff829b3482: Link UP Jan 23 00:09:11.471600 systemd-networkd[1422]: calieff829b3482: Gained carrier Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.368 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0 calico-apiserver-ccfb6488b- calico-apiserver 4412ff60-1893-4e70-a14c-c509d31ae479 818 0 2026-01-23 00:08:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccfb6488b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 calico-apiserver-ccfb6488b-mxfqt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieff829b3482 [] [] }} ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.369 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.409 [INFO][4219] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" 
HandleID="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.409 [INFO][4219] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" HandleID="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-n-8734b5e787", "pod":"calico-apiserver-ccfb6488b-mxfqt", "timestamp":"2026-01-23 00:09:11.409108022 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.409 [INFO][4219] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.409 [INFO][4219] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.409 [INFO][4219] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.421 [INFO][4219] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.427 [INFO][4219] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.433 [INFO][4219] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.436 [INFO][4219] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.439 [INFO][4219] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.440 [INFO][4219] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.442 [INFO][4219] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.447 [INFO][4219] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.456 [INFO][4219] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.3.66/26] block=192.168.3.64/26 handle="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.456 [INFO][4219] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.66/26] handle="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.456 [INFO][4219] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:09:11.501629 containerd[1518]: 2026-01-23 00:09:11.456 [INFO][4219] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.66/26] IPv6=[] ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" HandleID="k8s-pod-network.dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.502328 containerd[1518]: 2026-01-23 00:09:11.463 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0", GenerateName:"calico-apiserver-ccfb6488b-", Namespace:"calico-apiserver", SelfLink:"", UID:"4412ff60-1893-4e70-a14c-c509d31ae479", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ccfb6488b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"calico-apiserver-ccfb6488b-mxfqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieff829b3482", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:11.502328 containerd[1518]: 2026-01-23 00:09:11.463 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.66/32] ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.502328 containerd[1518]: 2026-01-23 00:09:11.463 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieff829b3482 ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.502328 containerd[1518]: 2026-01-23 00:09:11.484 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" 
WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.502328 containerd[1518]: 2026-01-23 00:09:11.484 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0", GenerateName:"calico-apiserver-ccfb6488b-", Namespace:"calico-apiserver", SelfLink:"", UID:"4412ff60-1893-4e70-a14c-c509d31ae479", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccfb6488b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f", Pod:"calico-apiserver-ccfb6488b-mxfqt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieff829b3482", MAC:"06:5b:f9:80:c0:8f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:11.502328 containerd[1518]: 2026-01-23 00:09:11.494 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-mxfqt" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--mxfqt-eth0" Jan 23 00:09:11.580504 systemd-networkd[1422]: calie230b64d268: Link UP Jan 23 00:09:11.582527 systemd-networkd[1422]: calie230b64d268: Gained carrier Jan 23 00:09:11.598730 containerd[1518]: time="2026-01-23T00:09:11.597607789Z" level=info msg="connecting to shim dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f" address="unix:///run/containerd/s/8a63e8d39a05bf065503e00d17a97bfab9973fa1f62497f9a3061e6c762a25e1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.368 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0 calico-kube-controllers-5df54fc4cc- calico-system 535d2a42-9500-4db2-9b67-fdfed8e04eb2 816 0 2026-01-23 00:08:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5df54fc4cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 calico-kube-controllers-5df54fc4cc-6ldlj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie230b64d268 [] [] }} ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" 
WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.368 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.410 [INFO][4218] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" HandleID="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.410 [INFO][4218] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" HandleID="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2f90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-8734b5e787", "pod":"calico-kube-controllers-5df54fc4cc-6ldlj", "timestamp":"2026-01-23 00:09:11.41061609 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.411 [INFO][4218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.456 [INFO][4218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.457 [INFO][4218] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.521 [INFO][4218] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.529 [INFO][4218] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.537 [INFO][4218] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.544 [INFO][4218] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.549 [INFO][4218] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.549 [INFO][4218] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.552 [INFO][4218] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7 Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.559 [INFO][4218] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" 
host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.572 [INFO][4218] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.67/26] block=192.168.3.64/26 handle="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.572 [INFO][4218] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.67/26] handle="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.573 [INFO][4218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:09:11.611293 containerd[1518]: 2026-01-23 00:09:11.573 [INFO][4218] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.67/26] IPv6=[] ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" HandleID="k8s-pod-network.512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.612539 containerd[1518]: 2026-01-23 00:09:11.577 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0", GenerateName:"calico-kube-controllers-5df54fc4cc-", Namespace:"calico-system", SelfLink:"", UID:"535d2a42-9500-4db2-9b67-fdfed8e04eb2", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 52, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df54fc4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"calico-kube-controllers-5df54fc4cc-6ldlj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie230b64d268", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:11.612539 containerd[1518]: 2026-01-23 00:09:11.577 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.67/32] ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.612539 containerd[1518]: 2026-01-23 00:09:11.577 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie230b64d268 ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.612539 containerd[1518]: 2026-01-23 00:09:11.582 [INFO][4198] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.612539 containerd[1518]: 2026-01-23 00:09:11.586 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0", GenerateName:"calico-kube-controllers-5df54fc4cc-", Namespace:"calico-system", SelfLink:"", UID:"535d2a42-9500-4db2-9b67-fdfed8e04eb2", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5df54fc4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7", Pod:"calico-kube-controllers-5df54fc4cc-6ldlj", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie230b64d268", MAC:"2e:31:70:80:d4:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:11.612539 containerd[1518]: 2026-01-23 00:09:11.608 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" Namespace="calico-system" Pod="calico-kube-controllers-5df54fc4cc-6ldlj" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--kube--controllers--5df54fc4cc--6ldlj-eth0" Jan 23 00:09:11.666417 containerd[1518]: time="2026-01-23T00:09:11.666290107Z" level=info msg="connecting to shim 512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7" address="unix:///run/containerd/s/23acc39a184e402b0cbf465ee9608a61b00ae6bf34476e80be8199f9215b3e54" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:11.682955 systemd[1]: Started cri-containerd-dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f.scope - libcontainer container dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f. Jan 23 00:09:11.710106 systemd[1]: Started cri-containerd-512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7.scope - libcontainer container 512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7. 
Jan 23 00:09:11.759052 containerd[1518]: time="2026-01-23T00:09:11.757337602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-mxfqt,Uid:4412ff60-1893-4e70-a14c-c509d31ae479,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dc57e56d766dba63624036f2279e9aecd527c023ada0bcbbf9397c95cb717d0f\"" Jan 23 00:09:11.762603 containerd[1518]: time="2026-01-23T00:09:11.762541780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:09:11.788700 containerd[1518]: time="2026-01-23T00:09:11.788617388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5df54fc4cc-6ldlj,Uid:535d2a42-9500-4db2-9b67-fdfed8e04eb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"512ea221cc722600250df1024525f07cf74783130e198a132d75fdee4fb75cd7\"" Jan 23 00:09:12.107614 containerd[1518]: time="2026-01-23T00:09:12.107535987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:12.109523 containerd[1518]: time="2026-01-23T00:09:12.109452605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:09:12.109599 containerd[1518]: time="2026-01-23T00:09:12.109556042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:12.109915 kubelet[2765]: E0123 00:09:12.109802 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 
00:09:12.109915 kubelet[2765]: E0123 00:09:12.109854 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:12.110385 kubelet[2765]: E0123 00:09:12.110046 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:12.110385 kubelet[2765]: E0123 00:09:12.110082 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:12.112515 containerd[1518]: time="2026-01-23T00:09:12.111574176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:09:12.278787 containerd[1518]: time="2026-01-23T00:09:12.277917544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-prfmf,Uid:04dc4181-cb72-4a34-bab6-405893931b00,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:09:12.281722 containerd[1518]: time="2026-01-23T00:09:12.281209157Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-jmrqc,Uid:a758f720-5521-43b4-9b45-c628b5675ddd,Namespace:kube-system,Attempt:0,}" Jan 23 00:09:12.470782 containerd[1518]: time="2026-01-23T00:09:12.470543659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:12.480108 containerd[1518]: time="2026-01-23T00:09:12.479980033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:09:12.480108 containerd[1518]: time="2026-01-23T00:09:12.480041591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:09:12.481141 kubelet[2765]: E0123 00:09:12.480458 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:09:12.481141 kubelet[2765]: E0123 00:09:12.480504 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:09:12.481141 kubelet[2765]: E0123 00:09:12.480585 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:12.481141 kubelet[2765]: E0123 00:09:12.480618 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:12.551242 kubelet[2765]: E0123 00:09:12.550835 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:12.551242 kubelet[2765]: E0123 00:09:12.551009 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:12.561379 systemd-networkd[1422]: cali59559792fb6: Link UP Jan 23 00:09:12.561510 systemd-networkd[1422]: cali59559792fb6: Gained carrier Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.390 [INFO][4353] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0 coredns-66bc5c9577- kube-system a758f720-5521-43b4-9b45-c628b5675ddd 814 0 2026-01-23 00:08:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 coredns-66bc5c9577-jmrqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59559792fb6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.391 [INFO][4353] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.458 [INFO][4372] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" HandleID="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Workload="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.458 [INFO][4372] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" HandleID="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Workload="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-n-8734b5e787", "pod":"coredns-66bc5c9577-jmrqc", "timestamp":"2026-01-23 00:09:12.458316256 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.458 [INFO][4372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.458 [INFO][4372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.458 [INFO][4372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.482 [INFO][4372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.507 [INFO][4372] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.517 [INFO][4372] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.520 [INFO][4372] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.524 [INFO][4372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.524 [INFO][4372] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.527 [INFO][4372] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684 Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.534 [INFO][4372] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.547 [INFO][4372] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.3.68/26] block=192.168.3.64/26 handle="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.547 [INFO][4372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.68/26] handle="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.547 [INFO][4372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:09:12.596180 containerd[1518]: 2026-01-23 00:09:12.547 [INFO][4372] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.68/26] IPv6=[] ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" HandleID="k8s-pod-network.d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Workload="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.598375 containerd[1518]: 2026-01-23 00:09:12.556 [INFO][4353] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a758f720-5521-43b4-9b45-c628b5675ddd", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"coredns-66bc5c9577-jmrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59559792fb6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:12.598375 containerd[1518]: 2026-01-23 00:09:12.556 [INFO][4353] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.68/32] ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.598375 containerd[1518]: 2026-01-23 00:09:12.556 [INFO][4353] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59559792fb6 
ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.598375 containerd[1518]: 2026-01-23 00:09:12.560 [INFO][4353] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.598531 containerd[1518]: 2026-01-23 00:09:12.560 [INFO][4353] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a758f720-5521-43b4-9b45-c628b5675ddd", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", 
ContainerID:"d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684", Pod:"coredns-66bc5c9577-jmrqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59559792fb6", MAC:"3a:33:a6:23:85:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:12.598531 containerd[1518]: 2026-01-23 00:09:12.593 [INFO][4353] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" Namespace="kube-system" Pod="coredns-66bc5c9577-jmrqc" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--jmrqc-eth0" Jan 23 00:09:12.654842 systemd-networkd[1422]: calie230b64d268: Gained IPv6LL Jan 23 00:09:12.655132 systemd-networkd[1422]: vxlan.calico: Gained IPv6LL Jan 23 00:09:12.674633 containerd[1518]: time="2026-01-23T00:09:12.674576445Z" level=info msg="connecting to shim d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684" 
address="unix:///run/containerd/s/7e312f57ae671acbc673cccd147c2cdaa0b23e285f0dd91fc81652245d4aa6cd" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:12.721962 systemd[1]: Started cri-containerd-d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684.scope - libcontainer container d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684. Jan 23 00:09:12.756334 systemd-networkd[1422]: cali70bd38d3bfa: Link UP Jan 23 00:09:12.762362 systemd-networkd[1422]: cali70bd38d3bfa: Gained carrier Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.408 [INFO][4348] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0 calico-apiserver-ccfb6488b- calico-apiserver 04dc4181-cb72-4a34-bab6-405893931b00 817 0 2026-01-23 00:08:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccfb6488b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 calico-apiserver-ccfb6488b-prfmf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70bd38d3bfa [] [] }} ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.410 [INFO][4348] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 
00:09:12.473 [INFO][4377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" HandleID="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.473 [INFO][4377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" HandleID="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-n-8734b5e787", "pod":"calico-apiserver-ccfb6488b-prfmf", "timestamp":"2026-01-23 00:09:12.473060138 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.473 [INFO][4377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.547 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.548 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.600 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.630 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.673 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.691 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.698 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.698 [INFO][4377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.702 [INFO][4377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108 Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.721 [INFO][4377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.736 [INFO][4377] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.3.69/26] block=192.168.3.64/26 handle="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.736 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.69/26] handle="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.736 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:09:12.789931 containerd[1518]: 2026-01-23 00:09:12.736 [INFO][4377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.69/26] IPv6=[] ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" HandleID="k8s-pod-network.c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Workload="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.790505 containerd[1518]: 2026-01-23 00:09:12.740 [INFO][4348] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0", GenerateName:"calico-apiserver-ccfb6488b-", Namespace:"calico-apiserver", SelfLink:"", UID:"04dc4181-cb72-4a34-bab6-405893931b00", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"ccfb6488b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"calico-apiserver-ccfb6488b-prfmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70bd38d3bfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:12.790505 containerd[1518]: 2026-01-23 00:09:12.740 [INFO][4348] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.69/32] ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.790505 containerd[1518]: 2026-01-23 00:09:12.741 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70bd38d3bfa ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.790505 containerd[1518]: 2026-01-23 00:09:12.761 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" 
WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.790505 containerd[1518]: 2026-01-23 00:09:12.764 [INFO][4348] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0", GenerateName:"calico-apiserver-ccfb6488b-", Namespace:"calico-apiserver", SelfLink:"", UID:"04dc4181-cb72-4a34-bab6-405893931b00", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccfb6488b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108", Pod:"calico-apiserver-ccfb6488b-prfmf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70bd38d3bfa", MAC:"9a:aa:e2:c7:1a:a0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:12.790505 containerd[1518]: 2026-01-23 00:09:12.785 [INFO][4348] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" Namespace="calico-apiserver" Pod="calico-apiserver-ccfb6488b-prfmf" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-calico--apiserver--ccfb6488b--prfmf-eth0" Jan 23 00:09:12.847454 containerd[1518]: time="2026-01-23T00:09:12.846654907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jmrqc,Uid:a758f720-5521-43b4-9b45-c628b5675ddd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684\"" Jan 23 00:09:12.860075 containerd[1518]: time="2026-01-23T00:09:12.860017794Z" level=info msg="connecting to shim c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108" address="unix:///run/containerd/s/671a92cec34e1fd2d08b0e924723c6b619d02a620d96eeab676edabdc5cbc79e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:12.867784 containerd[1518]: time="2026-01-23T00:09:12.867703784Z" level=info msg="CreateContainer within sandbox \"d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:09:12.909260 containerd[1518]: time="2026-01-23T00:09:12.908418185Z" level=info msg="Container a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:09:12.911256 systemd-networkd[1422]: calieff829b3482: Gained IPv6LL Jan 23 00:09:12.912937 systemd[1]: Started cri-containerd-c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108.scope - libcontainer container c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108. 
Jan 23 00:09:12.921408 containerd[1518]: time="2026-01-23T00:09:12.921356845Z" level=info msg="CreateContainer within sandbox \"d3751c0fd4cf0c051e75c11e84ae1d36bfb627124a0ccc1250d971e59eec7684\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb\"" Jan 23 00:09:12.923041 containerd[1518]: time="2026-01-23T00:09:12.922994792Z" level=info msg="StartContainer for \"a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb\"" Jan 23 00:09:12.925950 containerd[1518]: time="2026-01-23T00:09:12.924919690Z" level=info msg="connecting to shim a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb" address="unix:///run/containerd/s/7e312f57ae671acbc673cccd147c2cdaa0b23e285f0dd91fc81652245d4aa6cd" protocol=ttrpc version=3 Jan 23 00:09:12.955961 systemd[1]: Started cri-containerd-a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb.scope - libcontainer container a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb. 
Jan 23 00:09:12.993015 containerd[1518]: time="2026-01-23T00:09:12.992414782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfb6488b-prfmf,Uid:04dc4181-cb72-4a34-bab6-405893931b00,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c77936d0422f31b20e778e43bbfe73d53dd5d69dc218f014cbad625a22980108\"" Jan 23 00:09:12.998655 containerd[1518]: time="2026-01-23T00:09:12.998593141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:09:13.002331 containerd[1518]: time="2026-01-23T00:09:13.002295105Z" level=info msg="StartContainer for \"a5baeab0e218001204f6aadbf2e8f80580c4f05130ec0c1bfbb71b1401a9a7fb\" returns successfully" Jan 23 00:09:13.286870 containerd[1518]: time="2026-01-23T00:09:13.286578434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ckhsz,Uid:8a80960d-00a3-4ff3-b4b4-d91b05db2e77,Namespace:kube-system,Attempt:0,}" Jan 23 00:09:13.347647 containerd[1518]: time="2026-01-23T00:09:13.347590008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:13.352471 containerd[1518]: time="2026-01-23T00:09:13.351138821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:09:13.352727 containerd[1518]: time="2026-01-23T00:09:13.351200339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:13.353398 kubelet[2765]: E0123 00:09:13.353274 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:13.353398 kubelet[2765]: E0123 00:09:13.353333 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:13.354226 kubelet[2765]: E0123 00:09:13.353814 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:13.354226 kubelet[2765]: E0123 00:09:13.353896 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:13.461174 systemd-networkd[1422]: calia6b4ac31782: Link UP Jan 23 00:09:13.463478 systemd-networkd[1422]: calia6b4ac31782: Gained carrier Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.351 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0 
coredns-66bc5c9577- kube-system 8a80960d-00a3-4ff3-b4b4-d91b05db2e77 813 0 2026-01-23 00:08:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 coredns-66bc5c9577-ckhsz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6b4ac31782 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.351 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.396 [INFO][4537] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" HandleID="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Workload="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.396 [INFO][4537] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" HandleID="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Workload="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf5e0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-n-8734b5e787", "pod":"coredns-66bc5c9577-ckhsz", "timestamp":"2026-01-23 00:09:13.396250031 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.396 [INFO][4537] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.396 [INFO][4537] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.396 [INFO][4537] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.408 [INFO][4537] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.416 [INFO][4537] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.426 [INFO][4537] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.429 [INFO][4537] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.434 [INFO][4537] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.434 [INFO][4537] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 
handle="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.439 [INFO][4537] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96 Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.445 [INFO][4537] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.453 [INFO][4537] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.70/26] block=192.168.3.64/26 handle="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.453 [INFO][4537] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.70/26] handle="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.454 [INFO][4537] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:09:13.497915 containerd[1518]: 2026-01-23 00:09:13.454 [INFO][4537] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.70/26] IPv6=[] ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" HandleID="k8s-pod-network.f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Workload="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 00:09:13.498896 containerd[1518]: 2026-01-23 00:09:13.457 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8a80960d-00a3-4ff3-b4b4-d91b05db2e77", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"coredns-66bc5c9577-ckhsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia6b4ac31782", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:13.498896 containerd[1518]: 2026-01-23 00:09:13.457 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.70/32] ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 00:09:13.498896 containerd[1518]: 2026-01-23 00:09:13.457 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6b4ac31782 ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 00:09:13.498896 containerd[1518]: 2026-01-23 00:09:13.466 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 
00:09:13.500159 containerd[1518]: 2026-01-23 00:09:13.468 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8a80960d-00a3-4ff3-b4b4-d91b05db2e77", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96", Pod:"coredns-66bc5c9577-ckhsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6b4ac31782", MAC:"ba:b5:73:fb:a1:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:13.500159 containerd[1518]: 2026-01-23 00:09:13.490 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" Namespace="kube-system" Pod="coredns-66bc5c9577-ckhsz" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-coredns--66bc5c9577--ckhsz-eth0" Jan 23 00:09:13.549795 containerd[1518]: time="2026-01-23T00:09:13.549630399Z" level=info msg="connecting to shim f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96" address="unix:///run/containerd/s/90c2585fb2e1fc3061d941769e34ba3b7357bfbbad1ef78780437b2ddfae6f5a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:13.555359 kubelet[2765]: E0123 00:09:13.555299 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:13.572396 kubelet[2765]: E0123 00:09:13.572310 2765 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:13.586154 kubelet[2765]: E0123 00:09:13.586005 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:13.625586 systemd[1]: Started cri-containerd-f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96.scope - libcontainer container f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96. 
Jan 23 00:09:13.710438 containerd[1518]: time="2026-01-23T00:09:13.710381186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ckhsz,Uid:8a80960d-00a3-4ff3-b4b4-d91b05db2e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96\"" Jan 23 00:09:13.717795 containerd[1518]: time="2026-01-23T00:09:13.717759045Z" level=info msg="CreateContainer within sandbox \"f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:09:13.732364 containerd[1518]: time="2026-01-23T00:09:13.732328529Z" level=info msg="Container da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:09:13.741004 containerd[1518]: time="2026-01-23T00:09:13.740881793Z" level=info msg="CreateContainer within sandbox \"f05cb172fd0335cc2f4f7cb42b6b1437135992982d0f6ced5c39edbd0ec6fc96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104\"" Jan 23 00:09:13.741998 containerd[1518]: time="2026-01-23T00:09:13.741940721Z" level=info msg="StartContainer for \"da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104\"" Jan 23 00:09:13.743385 containerd[1518]: time="2026-01-23T00:09:13.743338399Z" level=info msg="connecting to shim da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104" address="unix:///run/containerd/s/90c2585fb2e1fc3061d941769e34ba3b7357bfbbad1ef78780437b2ddfae6f5a" protocol=ttrpc version=3 Jan 23 00:09:13.772048 systemd[1]: Started cri-containerd-da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104.scope - libcontainer container da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104. 
Jan 23 00:09:13.808093 containerd[1518]: time="2026-01-23T00:09:13.807954265Z" level=info msg="StartContainer for \"da7cb2f5693f514ed6e59d6c8091bb9f436279c77c889c9b39d734a46c852104\" returns successfully" Jan 23 00:09:14.256040 systemd-networkd[1422]: cali59559792fb6: Gained IPv6LL Jan 23 00:09:14.279650 containerd[1518]: time="2026-01-23T00:09:14.278121135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vc8hr,Uid:e04e4457-45b8-4cc6-bb9d-cbd9eec7c520,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:14.279650 containerd[1518]: time="2026-01-23T00:09:14.278737438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd26x,Uid:40189ccf-4a54-4a06-a382-10a9d6df2d28,Namespace:calico-system,Attempt:0,}" Jan 23 00:09:14.447842 systemd-networkd[1422]: cali70bd38d3bfa: Gained IPv6LL Jan 23 00:09:14.518721 systemd-networkd[1422]: calid1340b356db: Link UP Jan 23 00:09:14.519975 systemd-networkd[1422]: calid1340b356db: Gained carrier Jan 23 00:09:14.539722 kubelet[2765]: I0123 00:09:14.539639 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jmrqc" podStartSLOduration=45.539617734 podStartE2EDuration="45.539617734s" podCreationTimestamp="2026-01-23 00:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:09:13.678118592 +0000 UTC m=+50.558496547" watchObservedRunningTime="2026-01-23 00:09:14.539617734 +0000 UTC m=+51.419995689" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.370 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0 goldmane-7c778bb748- calico-system e04e4457-45b8-4cc6-bb9d-cbd9eec7c520 815 0 2026-01-23 00:08:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 goldmane-7c778bb748-vc8hr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid1340b356db [] [] }} ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.371 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.419 [INFO][4663] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" HandleID="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Workload="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.419 [INFO][4663] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" HandleID="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Workload="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-8734b5e787", "pod":"goldmane-7c778bb748-vc8hr", "timestamp":"2026-01-23 00:09:14.419482122 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.419 [INFO][4663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.419 [INFO][4663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.419 [INFO][4663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.434 [INFO][4663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.449 [INFO][4663] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.479 [INFO][4663] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.484 [INFO][4663] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.488 [INFO][4663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.488 [INFO][4663] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.492 [INFO][4663] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33 Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.498 [INFO][4663] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 handle="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.508 [INFO][4663] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.71/26] block=192.168.3.64/26 handle="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.509 [INFO][4663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.71/26] handle="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.509 [INFO][4663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:09:14.547644 containerd[1518]: 2026-01-23 00:09:14.509 [INFO][4663] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.71/26] IPv6=[] ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" HandleID="k8s-pod-network.d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Workload="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.550104 containerd[1518]: 2026-01-23 00:09:14.513 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e04e4457-45b8-4cc6-bb9d-cbd9eec7c520", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"goldmane-7c778bb748-vc8hr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"calid1340b356db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:14.550104 containerd[1518]: 2026-01-23 00:09:14.513 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.71/32] ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.550104 containerd[1518]: 2026-01-23 00:09:14.513 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1340b356db ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.550104 containerd[1518]: 2026-01-23 00:09:14.522 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.550104 containerd[1518]: 2026-01-23 00:09:14.523 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", 
UID:"e04e4457-45b8-4cc6-bb9d-cbd9eec7c520", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33", Pod:"goldmane-7c778bb748-vc8hr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.3.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid1340b356db", MAC:"ae:7b:e3:02:4d:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:14.550104 containerd[1518]: 2026-01-23 00:09:14.543 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" Namespace="calico-system" Pod="goldmane-7c778bb748-vc8hr" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-goldmane--7c778bb748--vc8hr-eth0" Jan 23 00:09:14.578401 kubelet[2765]: E0123 00:09:14.578356 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:14.617346 containerd[1518]: time="2026-01-23T00:09:14.617283195Z" level=info msg="connecting to shim d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33" address="unix:///run/containerd/s/5b991f3356fed2a341de211d73a00ef7a8f2f0de9fc279550ee9ecdf203eb7d6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:14.628378 systemd-networkd[1422]: cali37a782a54a7: Link UP Jan 23 00:09:14.629162 systemd-networkd[1422]: cali37a782a54a7: Gained carrier Jan 23 00:09:14.650087 kubelet[2765]: I0123 00:09:14.650011 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ckhsz" podStartSLOduration=45.649959415 podStartE2EDuration="45.649959415s" podCreationTimestamp="2026-01-23 00:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:09:14.647991869 +0000 UTC m=+51.528369824" watchObservedRunningTime="2026-01-23 00:09:14.649959415 +0000 UTC m=+51.530337370" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.375 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0 csi-node-driver- calico-system 40189ccf-4a54-4a06-a382-10a9d6df2d28 726 0 2026-01-23 00:08:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-n-8734b5e787 csi-node-driver-fd26x eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] cali37a782a54a7 [] [] }} ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.376 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.481 [INFO][4668] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" HandleID="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Workload="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.482 [INFO][4668] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" HandleID="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Workload="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003305e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-8734b5e787", "pod":"csi-node-driver-fd26x", "timestamp":"2026-01-23 00:09:14.481871164 +0000 UTC"}, Hostname:"ci-4459-2-2-n-8734b5e787", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.482 [INFO][4668] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.509 [INFO][4668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.510 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-8734b5e787' Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.535 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.550 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.562 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.570 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.578 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.3.64/26 host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.578 [INFO][4668] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.3.64/26 handle="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.588 [INFO][4668] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2 Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.602 [INFO][4668] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.3.64/26 
handle="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.613 [INFO][4668] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.3.72/26] block=192.168.3.64/26 handle="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.614 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.3.72/26] handle="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" host="ci-4459-2-2-n-8734b5e787" Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.614 [INFO][4668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:09:14.665188 containerd[1518]: 2026-01-23 00:09:14.614 [INFO][4668] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.3.72/26] IPv6=[] ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" HandleID="k8s-pod-network.cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Workload="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.665780 containerd[1518]: 2026-01-23 00:09:14.621 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40189ccf-4a54-4a06-a382-10a9d6df2d28", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 52, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"", Pod:"csi-node-driver-fd26x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37a782a54a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:14.665780 containerd[1518]: 2026-01-23 00:09:14.621 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.3.72/32] ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.665780 containerd[1518]: 2026-01-23 00:09:14.621 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37a782a54a7 ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.665780 containerd[1518]: 2026-01-23 00:09:14.629 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.665780 containerd[1518]: 2026-01-23 00:09:14.633 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40189ccf-4a54-4a06-a382-10a9d6df2d28", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 8, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-8734b5e787", ContainerID:"cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2", Pod:"csi-node-driver-fd26x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37a782a54a7", MAC:"72:d3:c0:5a:cc:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:09:14.665780 containerd[1518]: 2026-01-23 00:09:14.651 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" Namespace="calico-system" Pod="csi-node-driver-fd26x" WorkloadEndpoint="ci--4459--2--2--n--8734b5e787-k8s-csi--node--driver--fd26x-eth0" Jan 23 00:09:14.725016 systemd[1]: Started cri-containerd-d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33.scope - libcontainer container d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33. Jan 23 00:09:14.729327 containerd[1518]: time="2026-01-23T00:09:14.729267351Z" level=info msg="connecting to shim cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2" address="unix:///run/containerd/s/8fe70d9885fea28436c97526afbc41b4919411e0729714f5cfeb444f1e18d38a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:09:14.773036 systemd[1]: Started cri-containerd-cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2.scope - libcontainer container cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2. 
Jan 23 00:09:14.801185 containerd[1518]: time="2026-01-23T00:09:14.800927618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-vc8hr,Uid:e04e4457-45b8-4cc6-bb9d-cbd9eec7c520,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0bae831a1786df07b2b40bdd720ad04f010f09c04f1540fd3b5c371c776cf33\"" Jan 23 00:09:14.807500 containerd[1518]: time="2026-01-23T00:09:14.807456078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:09:14.817473 containerd[1518]: time="2026-01-23T00:09:14.817434283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd26x,Uid:40189ccf-4a54-4a06-a382-10a9d6df2d28,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb5a7534bc251bb23319fef4d7d7abe34fab14e009fbddfc28ada599bc3009d2\"" Jan 23 00:09:14.831573 systemd-networkd[1422]: calia6b4ac31782: Gained IPv6LL Jan 23 00:09:15.148103 containerd[1518]: time="2026-01-23T00:09:15.147976724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:15.149852 containerd[1518]: time="2026-01-23T00:09:15.149762599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:09:15.150188 containerd[1518]: time="2026-01-23T00:09:15.149886995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:15.150250 kubelet[2765]: E0123 00:09:15.150127 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:09:15.150250 kubelet[2765]: E0123 00:09:15.150184 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:09:15.150590 kubelet[2765]: E0123 00:09:15.150511 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:15.150867 kubelet[2765]: E0123 00:09:15.150633 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:15.151524 containerd[1518]: time="2026-01-23T00:09:15.151127964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:09:15.485778 containerd[1518]: time="2026-01-23T00:09:15.485624411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:15.488526 containerd[1518]: time="2026-01-23T00:09:15.488395261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:09:15.488526 containerd[1518]: time="2026-01-23T00:09:15.488433660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 00:09:15.489714 kubelet[2765]: E0123 00:09:15.488712 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:09:15.489714 kubelet[2765]: E0123 00:09:15.488789 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:09:15.489714 kubelet[2765]: E0123 00:09:15.488884 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:15.491500 containerd[1518]: time="2026-01-23T00:09:15.491457544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:09:15.581703 kubelet[2765]: E0123 00:09:15.581456 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:15.866736 containerd[1518]: time="2026-01-23T00:09:15.866625486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:15.869281 containerd[1518]: time="2026-01-23T00:09:15.869148262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:09:15.869281 containerd[1518]: time="2026-01-23T00:09:15.869249019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:09:15.869523 kubelet[2765]: E0123 00:09:15.869473 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:09:15.869587 kubelet[2765]: E0123 00:09:15.869534 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:09:15.870767 kubelet[2765]: E0123 00:09:15.869642 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:15.870767 kubelet[2765]: E0123 00:09:15.870108 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:09:16.110938 systemd-networkd[1422]: calid1340b356db: Gained IPv6LL Jan 23 00:09:16.112918 systemd-networkd[1422]: cali37a782a54a7: Gained IPv6LL Jan 23 00:09:16.585508 kubelet[2765]: E0123 00:09:16.585328 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:09:16.585508 kubelet[2765]: E0123 00:09:16.585377 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:17.339246 kubelet[2765]: I0123 00:09:17.339090 2765 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 00:09:22.278438 containerd[1518]: time="2026-01-23T00:09:22.278227405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:09:22.639777 containerd[1518]: time="2026-01-23T00:09:22.639723926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:22.642279 containerd[1518]: time="2026-01-23T00:09:22.642124420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: 
active requests=0, bytes read=73" Jan 23 00:09:22.642279 containerd[1518]: time="2026-01-23T00:09:22.642149619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:09:22.642618 kubelet[2765]: E0123 00:09:22.642569 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:09:22.643592 kubelet[2765]: E0123 00:09:22.643012 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:09:22.643592 kubelet[2765]: E0123 00:09:22.643115 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:22.644526 containerd[1518]: time="2026-01-23T00:09:22.644490514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:09:23.000610 containerd[1518]: time="2026-01-23T00:09:23.000471975Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 23 00:09:23.002236 containerd[1518]: time="2026-01-23T00:09:23.002154959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:09:23.002520 containerd[1518]: time="2026-01-23T00:09:23.002210959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:09:23.002703 kubelet[2765]: E0123 00:09:23.002595 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:09:23.002780 kubelet[2765]: E0123 00:09:23.002677 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:09:23.003436 kubelet[2765]: E0123 00:09:23.002970 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:23.003436 kubelet[2765]: E0123 00:09:23.003396 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:09:24.277469 containerd[1518]: time="2026-01-23T00:09:24.277208961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:09:24.609728 containerd[1518]: time="2026-01-23T00:09:24.609599538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:24.611368 containerd[1518]: time="2026-01-23T00:09:24.611307286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:09:24.611624 containerd[1518]: time="2026-01-23T00:09:24.611489724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:24.611876 kubelet[2765]: E0123 00:09:24.611800 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:24.612732 kubelet[2765]: E0123 00:09:24.612226 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:24.612732 kubelet[2765]: E0123 00:09:24.612428 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:24.612732 kubelet[2765]: E0123 00:09:24.612466 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:24.614712 containerd[1518]: time="2026-01-23T00:09:24.613270712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:09:24.956600 containerd[1518]: time="2026-01-23T00:09:24.956337211Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:24.958239 containerd[1518]: time="2026-01-23T00:09:24.958090558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:09:24.958239 containerd[1518]: time="2026-01-23T00:09:24.958127598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:09:24.958596 kubelet[2765]: E0123 00:09:24.958550 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:09:24.958670 kubelet[2765]: E0123 00:09:24.958608 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:09:24.958727 kubelet[2765]: E0123 00:09:24.958700 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:24.958798 kubelet[2765]: E0123 00:09:24.958744 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:25.277742 containerd[1518]: time="2026-01-23T00:09:25.276389185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:09:25.619524 containerd[1518]: time="2026-01-23T00:09:25.619424466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:25.621410 containerd[1518]: time="2026-01-23T00:09:25.621332495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:09:25.621497 containerd[1518]: time="2026-01-23T00:09:25.621456895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:25.621777 kubelet[2765]: E0123 00:09:25.621726 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:25.622340 kubelet[2765]: E0123 00:09:25.621792 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:25.622340 kubelet[2765]: E0123 00:09:25.621985 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:25.622340 kubelet[2765]: E0123 00:09:25.622047 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:27.277121 containerd[1518]: time="2026-01-23T00:09:27.276874285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:09:27.621276 containerd[1518]: time="2026-01-23T00:09:27.621229071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:27.622613 containerd[1518]: time="2026-01-23T00:09:27.622543028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:09:27.622861 containerd[1518]: time="2026-01-23T00:09:27.622577988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:27.622953 kubelet[2765]: E0123 00:09:27.622840 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:09:27.622953 kubelet[2765]: E0123 00:09:27.622893 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:09:27.623855 kubelet[2765]: E0123 00:09:27.623177 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:27.623855 kubelet[2765]: E0123 00:09:27.623220 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:29.283139 containerd[1518]: time="2026-01-23T00:09:29.283080815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:09:29.632629 containerd[1518]: time="2026-01-23T00:09:29.632574728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:29.634176 containerd[1518]: time="2026-01-23T00:09:29.634089009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:09:29.634395 containerd[1518]: time="2026-01-23T00:09:29.634189449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 00:09:29.634456 kubelet[2765]: E0123 00:09:29.634393 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:09:29.635933 kubelet[2765]: E0123 00:09:29.634445 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:09:29.635933 kubelet[2765]: E0123 00:09:29.634562 2765 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:29.636698 containerd[1518]: time="2026-01-23T00:09:29.636645131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:09:30.177726 containerd[1518]: time="2026-01-23T00:09:30.177540866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:30.179643 containerd[1518]: time="2026-01-23T00:09:30.179471150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:09:30.179643 containerd[1518]: time="2026-01-23T00:09:30.179595431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:09:30.179963 kubelet[2765]: E0123 00:09:30.179823 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:09:30.179963 kubelet[2765]: E0123 00:09:30.179871 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:09:30.179963 kubelet[2765]: E0123 00:09:30.179943 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:30.180134 kubelet[2765]: E0123 00:09:30.179982 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:09:36.277714 kubelet[2765]: E0123 00:09:36.276957 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:36.277714 kubelet[2765]: E0123 00:09:36.277189 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:37.277170 kubelet[2765]: E0123 00:09:37.277113 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:38.278527 kubelet[2765]: E0123 00:09:38.278372 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:09:40.278115 kubelet[2765]: E0123 00:09:40.277945 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:44.277164 kubelet[2765]: E0123 00:09:44.276640 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:09:48.284645 containerd[1518]: time="2026-01-23T00:09:48.284585269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:09:48.626672 containerd[1518]: time="2026-01-23T00:09:48.626626180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:48.628880 containerd[1518]: time="2026-01-23T00:09:48.628820467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:09:48.628994 containerd[1518]: time="2026-01-23T00:09:48.628968710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:09:48.629216 kubelet[2765]: E0123 00:09:48.629169 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:09:48.630142 kubelet[2765]: E0123 00:09:48.629228 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:09:48.630142 kubelet[2765]: E0123 00:09:48.629313 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:48.630142 kubelet[2765]: E0123 00:09:48.629347 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:09:49.278489 containerd[1518]: time="2026-01-23T00:09:49.276882128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:09:49.630303 containerd[1518]: time="2026-01-23T00:09:49.630130040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:49.632798 containerd[1518]: time="2026-01-23T00:09:49.632733458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:09:49.633355 containerd[1518]: time="2026-01-23T00:09:49.632775779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:49.633443 kubelet[2765]: E0123 00:09:49.633248 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:49.633443 kubelet[2765]: E0123 00:09:49.633302 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:49.633443 kubelet[2765]: E0123 00:09:49.633374 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:49.633443 kubelet[2765]: E0123 00:09:49.633405 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:09:50.279454 containerd[1518]: time="2026-01-23T00:09:50.279359954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:09:50.618069 containerd[1518]: time="2026-01-23T00:09:50.618011839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:50.619878 containerd[1518]: time="2026-01-23T00:09:50.619771759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:09:50.619878 containerd[1518]: time="2026-01-23T00:09:50.619839841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:09:50.621885 kubelet[2765]: E0123 00:09:50.620059 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:09:50.621885 kubelet[2765]: E0123 00:09:50.620105 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:09:50.621885 kubelet[2765]: E0123 00:09:50.620272 2765 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container whisker start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:50.622612 containerd[1518]: time="2026-01-23T00:09:50.622339219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:09:50.969809 containerd[1518]: time="2026-01-23T00:09:50.968397235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:50.971251 containerd[1518]: time="2026-01-23T00:09:50.970793330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:09:50.971251 containerd[1518]: time="2026-01-23T00:09:50.970899693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:50.971766 kubelet[2765]: E0123 00:09:50.971498 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:50.971766 kubelet[2765]: E0123 00:09:50.971567 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:09:50.972130 kubelet[2765]: E0123 00:09:50.971766 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:50.972130 kubelet[2765]: E0123 00:09:50.971813 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:09:50.972961 containerd[1518]: time="2026-01-23T00:09:50.972481809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:09:51.317245 containerd[1518]: time="2026-01-23T00:09:51.316830220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:51.318732 containerd[1518]: time="2026-01-23T00:09:51.318660744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:09:51.318852 containerd[1518]: time="2026-01-23T00:09:51.318784747Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:09:51.319111 kubelet[2765]: E0123 00:09:51.319074 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:09:51.319295 kubelet[2765]: E0123 00:09:51.319249 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:09:51.320701 kubelet[2765]: E0123 00:09:51.320541 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:51.320893 kubelet[2765]: E0123 00:09:51.320830 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:09:53.279304 containerd[1518]: time="2026-01-23T00:09:53.279257284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:09:53.620996 containerd[1518]: time="2026-01-23T00:09:53.620939335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:53.624703 containerd[1518]: time="2026-01-23T00:09:53.624616988Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:09:53.624703 containerd[1518]: time="2026-01-23T00:09:53.624664789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:09:53.625087 kubelet[2765]: E0123 00:09:53.624909 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:09:53.625087 kubelet[2765]: E0123 00:09:53.625058 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:09:53.631579 kubelet[2765]: E0123 00:09:53.625164 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:53.631747 kubelet[2765]: E0123 00:09:53.631581 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:09:59.276815 containerd[1518]: time="2026-01-23T00:09:59.276667269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:09:59.619745 containerd[1518]: time="2026-01-23T00:09:59.619668998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:59.621516 containerd[1518]: time="2026-01-23T00:09:59.621412008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:09:59.621516 containerd[1518]: time="2026-01-23T00:09:59.621478010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 
23 00:09:59.621936 kubelet[2765]: E0123 00:09:59.621866 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:09:59.621936 kubelet[2765]: E0123 00:09:59.621916 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:09:59.622730 kubelet[2765]: E0123 00:09:59.622407 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:59.624971 containerd[1518]: time="2026-01-23T00:09:59.624924111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:09:59.971472 containerd[1518]: time="2026-01-23T00:09:59.971320698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:09:59.973275 containerd[1518]: time="2026-01-23T00:09:59.973149791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:09:59.973275 containerd[1518]: time="2026-01-23T00:09:59.973230593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:09:59.973701 kubelet[2765]: E0123 00:09:59.973642 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:09:59.974876 kubelet[2765]: E0123 00:09:59.974756 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:09:59.975054 kubelet[2765]: E0123 00:09:59.975033 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:09:59.975283 kubelet[2765]: E0123 00:09:59.975110 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:10:02.279767 kubelet[2765]: E0123 00:10:02.278881 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:10:03.279531 kubelet[2765]: E0123 00:10:03.278106 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:10:05.277388 kubelet[2765]: E0123 00:10:05.277187 
2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:10:05.279872 kubelet[2765]: E0123 00:10:05.279829 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:10:07.278563 kubelet[2765]: E0123 00:10:07.277391 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:10:14.277782 kubelet[2765]: E0123 00:10:14.277341 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:10:15.279217 kubelet[2765]: E0123 00:10:15.277924 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" 
podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:10:16.275512 kubelet[2765]: E0123 00:10:16.275451 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:10:18.277742 kubelet[2765]: E0123 00:10:18.276268 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:10:20.279001 kubelet[2765]: E0123 00:10:20.278919 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:10:21.278250 kubelet[2765]: E0123 00:10:21.278185 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:10:26.276697 kubelet[2765]: E0123 00:10:26.276557 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:10:29.277345 kubelet[2765]: E0123 00:10:29.277027 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:10:29.279276 kubelet[2765]: E0123 00:10:29.278234 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:10:30.277323 containerd[1518]: time="2026-01-23T00:10:30.277145799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:10:30.627192 containerd[1518]: time="2026-01-23T00:10:30.626925111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:10:30.628656 containerd[1518]: time="2026-01-23T00:10:30.628510495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:10:30.628656 containerd[1518]: time="2026-01-23T00:10:30.628615179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:10:30.628886 kubelet[2765]: E0123 00:10:30.628809 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:10:30.628886 kubelet[2765]: E0123 00:10:30.628858 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:10:30.629227 kubelet[2765]: E0123 00:10:30.628944 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:10:30.629227 kubelet[2765]: E0123 00:10:30.628980 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:10:32.276266 kubelet[2765]: E0123 00:10:32.275769 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520" Jan 23 00:10:34.277387 containerd[1518]: time="2026-01-23T00:10:34.276176209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:10:34.611579 containerd[1518]: time="2026-01-23T00:10:34.611525407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:10:34.614530 containerd[1518]: time="2026-01-23T00:10:34.614441967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:10:34.614722 containerd[1518]: time="2026-01-23T00:10:34.614633055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:10:34.614970 kubelet[2765]: E0123 00:10:34.614921 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:10:34.615368 kubelet[2765]: E0123 00:10:34.614978 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:10:34.615368 kubelet[2765]: E0123 00:10:34.615054 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:10:34.617423 containerd[1518]: time="2026-01-23T00:10:34.617386968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:10:34.985917 containerd[1518]: time="2026-01-23T00:10:34.985771761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:10:34.989085 containerd[1518]: time="2026-01-23T00:10:34.988945252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:10:34.989085 containerd[1518]: time="2026-01-23T00:10:34.989056696Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:10:34.989464 kubelet[2765]: E0123 00:10:34.989414 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:10:34.989628 kubelet[2765]: E0123 00:10:34.989570 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:10:34.989771 kubelet[2765]: E0123 00:10:34.989751 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:10:34.989890 kubelet[2765]: E0123 00:10:34.989863 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab" Jan 23 00:10:38.276111 kubelet[2765]: E0123 00:10:38.275916 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28" Jan 23 00:10:40.276746 containerd[1518]: time="2026-01-23T00:10:40.276634869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:10:40.620625 containerd[1518]: time="2026-01-23T00:10:40.619988145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:10:40.621852 containerd[1518]: time="2026-01-23T00:10:40.621763340Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:10:40.621954 containerd[1518]: time="2026-01-23T00:10:40.621925987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:10:40.622166 kubelet[2765]: E0123 00:10:40.622119 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:10:40.623035 kubelet[2765]: E0123 00:10:40.622845 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:10:40.623035 kubelet[2765]: E0123 00:10:40.622959 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:10:40.623035 kubelet[2765]: E0123 00:10:40.622995 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00" Jan 23 00:10:43.279103 containerd[1518]: time="2026-01-23T00:10:43.278938554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:10:43.648561 containerd[1518]: time="2026-01-23T00:10:43.648464533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:10:43.651111 containerd[1518]: time="2026-01-23T00:10:43.650974240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:10:43.651111 containerd[1518]: time="2026-01-23T00:10:43.651034363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:10:43.651294 kubelet[2765]: E0123 00:10:43.651245 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:10:43.651658 kubelet[2765]: E0123 00:10:43.651298 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:10:43.651658 kubelet[2765]: E0123 00:10:43.651378 2765 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:10:43.651658 kubelet[2765]: E0123 00:10:43.651424 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479" Jan 23 00:10:44.276029 kubelet[2765]: E0123 00:10:44.275975 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2" Jan 23 00:10:45.278352 containerd[1518]: time="2026-01-23T00:10:45.277816971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:10:45.640874 containerd[1518]: time="2026-01-23T00:10:45.640681846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
00:10:45.642532 containerd[1518]: time="2026-01-23T00:10:45.642473163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:10:45.642907 containerd[1518]: time="2026-01-23T00:10:45.642513684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 00:10:45.642967 kubelet[2765]: E0123 00:10:45.642811 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:10:45.642967 kubelet[2765]: E0123 00:10:45.642873 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:10:45.643572 kubelet[2765]: E0123 00:10:45.642977 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:10:45.643572 kubelet[2765]: E0123 00:10:45.643010 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:10:50.277850 kubelet[2765]: E0123 00:10:50.277750 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:10:51.277284 containerd[1518]: time="2026-01-23T00:10:51.277244670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:10:51.621874 containerd[1518]: time="2026-01-23T00:10:51.621794329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:10:51.626643 containerd[1518]: time="2026-01-23T00:10:51.626573657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:10:51.626929 containerd[1518]: time="2026-01-23T00:10:51.626764745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:10:51.628718 kubelet[2765]: E0123 00:10:51.627382 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:10:51.628718 kubelet[2765]: E0123 00:10:51.627433 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:10:51.628718 kubelet[2765]: E0123 00:10:51.627505 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:10:51.630062 containerd[1518]: time="2026-01-23T00:10:51.630017007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 00:10:51.995175 containerd[1518]: time="2026-01-23T00:10:51.994759704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:10:51.997087 containerd[1518]: time="2026-01-23T00:10:51.996943039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 00:10:51.997087 containerd[1518]: time="2026-01-23T00:10:51.996996201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 00:10:51.998416 kubelet[2765]: E0123 00:10:51.997236 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:10:51.998416 kubelet[2765]: E0123 00:10:51.997284 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:10:51.998416 kubelet[2765]: E0123 00:10:51.997377 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:10:51.998674 kubelet[2765]: E0123 00:10:51.997419 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:10:53.277714 kubelet[2765]: E0123 00:10:53.276807 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:10:54.846513 systemd[1]: Started sshd@7-88.198.161.46:22-68.220.241.50:47430.service - OpenSSH per-connection server daemon (68.220.241.50:47430).
Jan 23 00:10:55.282404 kubelet[2765]: E0123 00:10:55.281124 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:10:55.521800 sshd[4994]: Accepted publickey for core from 68.220.241.50 port 47430 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:10:55.524385 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:10:55.533395 systemd-logind[1494]: New session 8 of user core.
Jan 23 00:10:55.537024 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 00:10:56.083890 sshd[4997]: Connection closed by 68.220.241.50 port 47430
Jan 23 00:10:56.084309 sshd-session[4994]: pam_unix(sshd:session): session closed for user core
Jan 23 00:10:56.091320 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit.
Jan 23 00:10:56.093086 systemd[1]: sshd@7-88.198.161.46:22-68.220.241.50:47430.service: Deactivated successfully.
Jan 23 00:10:56.095477 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 00:10:56.101383 systemd-logind[1494]: Removed session 8.
Jan 23 00:10:57.279823 kubelet[2765]: E0123 00:10:57.279761 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:10:59.282176 kubelet[2765]: E0123 00:10:59.281765 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"
Jan 23 00:11:01.190180 systemd[1]: Started sshd@8-88.198.161.46:22-68.220.241.50:47444.service - OpenSSH per-connection server daemon (68.220.241.50:47444).
Jan 23 00:11:01.809846 sshd[5012]: Accepted publickey for core from 68.220.241.50 port 47444 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:01.812349 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:01.823391 systemd-logind[1494]: New session 9 of user core.
Jan 23 00:11:01.831979 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 00:11:02.339775 sshd[5015]: Connection closed by 68.220.241.50 port 47444
Jan 23 00:11:02.340488 sshd-session[5012]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:02.347251 systemd[1]: sshd@8-88.198.161.46:22-68.220.241.50:47444.service: Deactivated successfully.
Jan 23 00:11:02.353461 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 00:11:02.357190 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit.
Jan 23 00:11:02.362291 systemd-logind[1494]: Removed session 9.
Jan 23 00:11:03.279502 kubelet[2765]: E0123 00:11:03.279445 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:11:04.280817 kubelet[2765]: E0123 00:11:04.280509 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:11:06.276706 kubelet[2765]: E0123 00:11:06.276651 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:11:07.461021 systemd[1]: Started sshd@9-88.198.161.46:22-68.220.241.50:45438.service - OpenSSH per-connection server daemon (68.220.241.50:45438).
Jan 23 00:11:08.123344 sshd[5028]: Accepted publickey for core from 68.220.241.50 port 45438 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:08.124958 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:08.135004 systemd-logind[1494]: New session 10 of user core.
Jan 23 00:11:08.142926 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 23 00:11:08.699641 sshd[5032]: Connection closed by 68.220.241.50 port 45438
Jan 23 00:11:08.701433 sshd-session[5028]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:08.707066 systemd[1]: sshd@9-88.198.161.46:22-68.220.241.50:45438.service: Deactivated successfully.
Jan 23 00:11:08.712748 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 00:11:08.718695 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit.
Jan 23 00:11:08.720654 systemd-logind[1494]: Removed session 10.
Jan 23 00:11:08.804775 systemd[1]: Started sshd@10-88.198.161.46:22-68.220.241.50:45448.service - OpenSSH per-connection server daemon (68.220.241.50:45448).
Jan 23 00:11:09.277468 kubelet[2765]: E0123 00:11:09.277235 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:11:09.433533 sshd[5044]: Accepted publickey for core from 68.220.241.50 port 45448 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:09.437403 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:09.446870 systemd-logind[1494]: New session 11 of user core.
Jan 23 00:11:09.451956 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 00:11:09.985729 sshd[5047]: Connection closed by 68.220.241.50 port 45448
Jan 23 00:11:09.986268 sshd-session[5044]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:09.991592 systemd[1]: sshd@10-88.198.161.46:22-68.220.241.50:45448.service: Deactivated successfully.
Jan 23 00:11:09.995622 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 00:11:09.998287 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit.
Jan 23 00:11:10.000087 systemd-logind[1494]: Removed session 11.
Jan 23 00:11:10.091795 systemd[1]: Started sshd@11-88.198.161.46:22-68.220.241.50:45464.service - OpenSSH per-connection server daemon (68.220.241.50:45464).
Jan 23 00:11:10.278294 kubelet[2765]: E0123 00:11:10.278146 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:11:10.710642 sshd[5057]: Accepted publickey for core from 68.220.241.50 port 45464 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:10.712351 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:10.720853 systemd-logind[1494]: New session 12 of user core.
Jan 23 00:11:10.726895 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 00:11:11.261250 sshd[5060]: Connection closed by 68.220.241.50 port 45464
Jan 23 00:11:11.263812 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:11.269890 systemd[1]: sshd@11-88.198.161.46:22-68.220.241.50:45464.service: Deactivated successfully.
Jan 23 00:11:11.274743 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 00:11:11.281155 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit.
Jan 23 00:11:11.284900 systemd-logind[1494]: Removed session 12.
Jan 23 00:11:14.275739 kubelet[2765]: E0123 00:11:14.275491 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"
Jan 23 00:11:14.279183 kubelet[2765]: E0123 00:11:14.279137 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:11:16.373920 systemd[1]: Started sshd@12-88.198.161.46:22-68.220.241.50:54744.service - OpenSSH per-connection server daemon (68.220.241.50:54744).
Jan 23 00:11:17.022618 sshd[5077]: Accepted publickey for core from 68.220.241.50 port 54744 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:17.025755 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:17.032974 systemd-logind[1494]: New session 13 of user core.
Jan 23 00:11:17.037930 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 00:11:17.281014 kubelet[2765]: E0123 00:11:17.280385 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:11:17.566349 sshd[5080]: Connection closed by 68.220.241.50 port 54744
Jan 23 00:11:17.566079 sshd-session[5077]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:17.572381 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit.
Jan 23 00:11:17.574159 systemd[1]: sshd@12-88.198.161.46:22-68.220.241.50:54744.service: Deactivated successfully.
Jan 23 00:11:17.577392 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 00:11:17.583187 systemd-logind[1494]: Removed session 13.
Jan 23 00:11:21.278091 kubelet[2765]: E0123 00:11:21.278028 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:11:22.275650 kubelet[2765]: E0123 00:11:22.275482 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:11:22.677929 systemd[1]: Started sshd@13-88.198.161.46:22-68.220.241.50:33178.service - OpenSSH per-connection server daemon (68.220.241.50:33178).
Jan 23 00:11:23.313755 sshd[5117]: Accepted publickey for core from 68.220.241.50 port 33178 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:23.316088 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:23.325634 systemd-logind[1494]: New session 14 of user core.
Jan 23 00:11:23.332537 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 00:11:23.864086 sshd[5122]: Connection closed by 68.220.241.50 port 33178
Jan 23 00:11:23.864963 sshd-session[5117]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:23.872745 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit.
Jan 23 00:11:23.873946 systemd[1]: sshd@13-88.198.161.46:22-68.220.241.50:33178.service: Deactivated successfully.
Jan 23 00:11:23.877526 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 00:11:23.883755 systemd-logind[1494]: Removed session 14.
Jan 23 00:11:25.283483 kubelet[2765]: E0123 00:11:25.282830 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:11:25.288556 kubelet[2765]: E0123 00:11:25.288281 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:11:28.277953 kubelet[2765]: E0123 00:11:28.275577 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"
Jan 23 00:11:28.978570 systemd[1]: Started sshd@14-88.198.161.46:22-68.220.241.50:33188.service - OpenSSH per-connection server daemon (68.220.241.50:33188).
Jan 23 00:11:29.617703 sshd[5134]: Accepted publickey for core from 68.220.241.50 port 33188 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:29.620424 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:29.625766 systemd-logind[1494]: New session 15 of user core.
Jan 23 00:11:29.639506 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 00:11:30.184712 sshd[5137]: Connection closed by 68.220.241.50 port 33188
Jan 23 00:11:30.185874 sshd-session[5134]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:30.192435 systemd[1]: sshd@14-88.198.161.46:22-68.220.241.50:33188.service: Deactivated successfully.
Jan 23 00:11:30.198266 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 00:11:30.202785 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit.
Jan 23 00:11:30.206322 systemd-logind[1494]: Removed session 15.
Jan 23 00:11:30.299432 systemd[1]: Started sshd@15-88.198.161.46:22-68.220.241.50:33196.service - OpenSSH per-connection server daemon (68.220.241.50:33196).
Jan 23 00:11:30.940628 sshd[5150]: Accepted publickey for core from 68.220.241.50 port 33196 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:30.943219 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:30.950779 systemd-logind[1494]: New session 16 of user core.
Jan 23 00:11:30.955980 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 00:11:31.289026 kubelet[2765]: E0123 00:11:31.288887 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:11:31.668825 sshd[5153]: Connection closed by 68.220.241.50 port 33196
Jan 23 00:11:31.669220 sshd-session[5150]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:31.675381 systemd[1]: sshd@15-88.198.161.46:22-68.220.241.50:33196.service: Deactivated successfully.
Jan 23 00:11:31.680889 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 00:11:31.684616 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit.
Jan 23 00:11:31.689994 systemd-logind[1494]: Removed session 16.
Jan 23 00:11:31.776047 systemd[1]: Started sshd@16-88.198.161.46:22-68.220.241.50:33200.service - OpenSSH per-connection server daemon (68.220.241.50:33200).
Jan 23 00:11:32.404478 sshd[5163]: Accepted publickey for core from 68.220.241.50 port 33200 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:32.408137 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:32.416757 systemd-logind[1494]: New session 17 of user core.
Jan 23 00:11:32.422970 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 00:11:33.747551 sshd[5166]: Connection closed by 68.220.241.50 port 33200
Jan 23 00:11:33.749435 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:33.756868 systemd[1]: sshd@16-88.198.161.46:22-68.220.241.50:33200.service: Deactivated successfully.
Jan 23 00:11:33.760660 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 00:11:33.762450 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit.
Jan 23 00:11:33.767042 systemd-logind[1494]: Removed session 17.
Jan 23 00:11:33.871191 systemd[1]: Started sshd@17-88.198.161.46:22-68.220.241.50:37352.service - OpenSSH per-connection server daemon (68.220.241.50:37352).
Jan 23 00:11:34.547425 sshd[5182]: Accepted publickey for core from 68.220.241.50 port 37352 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:34.550354 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:34.556029 systemd-logind[1494]: New session 18 of user core.
Jan 23 00:11:34.560922 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 00:11:35.277494 sshd[5185]: Connection closed by 68.220.241.50 port 37352
Jan 23 00:11:35.278065 sshd-session[5182]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:35.280277 kubelet[2765]: E0123 00:11:35.278965 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:11:35.281147 kubelet[2765]: E0123 00:11:35.280641 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:11:35.286940 systemd[1]: sshd@17-88.198.161.46:22-68.220.241.50:37352.service: Deactivated successfully.
Jan 23 00:11:35.291198 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 00:11:35.298303 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit.
Jan 23 00:11:35.299918 systemd-logind[1494]: Removed session 18.
Jan 23 00:11:35.379122 systemd[1]: Started sshd@18-88.198.161.46:22-68.220.241.50:37358.service - OpenSSH per-connection server daemon (68.220.241.50:37358).
Jan 23 00:11:36.007749 sshd[5197]: Accepted publickey for core from 68.220.241.50 port 37358 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:36.010296 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:36.018385 systemd-logind[1494]: New session 19 of user core.
Jan 23 00:11:36.024934 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 00:11:36.530719 sshd[5200]: Connection closed by 68.220.241.50 port 37358
Jan 23 00:11:36.531641 sshd-session[5197]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:36.539649 systemd[1]: sshd@18-88.198.161.46:22-68.220.241.50:37358.service: Deactivated successfully.
Jan 23 00:11:36.545283 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 00:11:36.547877 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit.
Jan 23 00:11:36.551140 systemd-logind[1494]: Removed session 19.
Jan 23 00:11:37.278671 kubelet[2765]: E0123 00:11:37.276396 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:11:39.283500 kubelet[2765]: E0123 00:11:39.283160 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:11:41.640807 systemd[1]: Started sshd@19-88.198.161.46:22-68.220.241.50:37366.service - OpenSSH per-connection server daemon (68.220.241.50:37366).
Jan 23 00:11:42.275352 kubelet[2765]: E0123 00:11:42.274947 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"
Jan 23 00:11:42.280050 sshd[5214]: Accepted publickey for core from 68.220.241.50 port 37366 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:42.281587 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:42.288663 systemd-logind[1494]: New session 20 of user core.
Jan 23 00:11:42.292928 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 23 00:11:42.795624 sshd[5217]: Connection closed by 68.220.241.50 port 37366
Jan 23 00:11:42.796484 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:42.803029 systemd[1]: session-20.scope: Deactivated successfully.
Jan 23 00:11:42.804039 systemd[1]: sshd@19-88.198.161.46:22-68.220.241.50:37366.service: Deactivated successfully.
Jan 23 00:11:42.812144 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit.
Jan 23 00:11:42.815751 systemd-logind[1494]: Removed session 20.
Jan 23 00:11:46.276369 kubelet[2765]: E0123 00:11:46.276256 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:11:46.278061 kubelet[2765]: E0123 00:11:46.278006 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:11:47.902008 systemd[1]: Started sshd@20-88.198.161.46:22-68.220.241.50:49572.service - OpenSSH per-connection server daemon (68.220.241.50:49572).
Jan 23 00:11:48.275938 kubelet[2765]: E0123 00:11:48.275739 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:11:48.279846 kubelet[2765]: E0123 00:11:48.279772 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:11:48.525543 sshd[5254]: Accepted publickey for core from 68.220.241.50 port 49572 ssh2: RSA SHA256:wScRSXm5JHKrAeSxAplDhSGBmu9+62e7CgH0oSNisYE
Jan 23 00:11:48.527073 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:11:48.532558 systemd-logind[1494]: New session 21 of user core.
Jan 23 00:11:48.536033 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 23 00:11:49.069486 sshd[5257]: Connection closed by 68.220.241.50 port 49572
Jan 23 00:11:49.069371 sshd-session[5254]: pam_unix(sshd:session): session closed for user core
Jan 23 00:11:49.076128 systemd[1]: sshd@20-88.198.161.46:22-68.220.241.50:49572.service: Deactivated successfully.
Jan 23 00:11:49.081004 systemd[1]: session-21.scope: Deactivated successfully.
Jan 23 00:11:49.082376 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit.
Jan 23 00:11:49.085570 systemd-logind[1494]: Removed session 21.
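Each of the kubelet `pod_workers.go` entries above carries the same three facts buried in an escaped `err` string: the pod, its UID, and the image reference that cannot be resolved. For triage it can help to extract those fields mechanically. Below is a minimal sketch (Python 3; the `failing_images` helper and both regexes are our own illustration, not part of kubelet or any standard tool) that parses one raw journald line of that shape:

```python
import re

# Matches kubelet "Error syncing pod, skipping" entries like the ones in this
# log and captures the escaped err payload plus the pod identity at the end.
ERR_LINE = re.compile(
    r'pod_workers\.go:\d+\] "Error syncing pod, skipping" '
    r'err="(?P<err>.*)" pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)
# Image references in the err payload look like ghcr.io/<repo path>:<tag>;
# the character classes stop at the escaping backslashes around them.
IMAGE_REF = re.compile(r'ghcr\.io/[\w./-]+:[\w.-]+')

def failing_images(line: str):
    """Return (pod, podUID, sorted unique image refs) or None if no match."""
    m = ERR_LINE.search(line)
    if not m:
        return None
    images = sorted(set(IMAGE_REF.findall(m.group("err"))))
    return m.group("pod"), m.group("uid"), images
```

Running this over the whole journal and counting results per image would show immediately that every backoff in this excerpt traces back to the same handful of `ghcr.io/flatcar/calico/*:v3.30.4` tags returning 404.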
Jan 23 00:11:50.276454 kubelet[2765]: E0123 00:11:50.276366 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:11:57.278982 containerd[1518]: time="2026-01-23T00:11:57.278300896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 00:11:57.621294 containerd[1518]: time="2026-01-23T00:11:57.620945506Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:11:57.622613 containerd[1518]: time="2026-01-23T00:11:57.622446921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 00:11:57.622613 containerd[1518]: time="2026-01-23T00:11:57.622571242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:11:57.623070 kubelet[2765]: E0123 00:11:57.623016 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:11:57.624918 kubelet[2765]: E0123 00:11:57.623496 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:11:57.624918 kubelet[2765]: E0123 00:11:57.623603 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5df54fc4cc-6ldlj_calico-system(535d2a42-9500-4db2-9b67-fdfed8e04eb2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:11:57.624918 kubelet[2765]: E0123 00:11:57.623650 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"
Jan 23 00:11:59.276898 kubelet[2765]: E0123 00:11:59.275652 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:12:00.276643 containerd[1518]: time="2026-01-23T00:12:00.276594892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 00:12:00.278048 kubelet[2765]: E0123 00:12:00.278002 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:12:00.634809 containerd[1518]: time="2026-01-23T00:12:00.634740683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:00.637044 containerd[1518]: time="2026-01-23T00:12:00.636976388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 00:12:00.637202 containerd[1518]: time="2026-01-23T00:12:00.637082669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 00:12:00.637404 kubelet[2765]: E0123 00:12:00.637367 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:12:00.637798 kubelet[2765]: E0123 00:12:00.637761 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:12:00.638045 kubelet[2765]: E0123 00:12:00.637976 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:00.640836 containerd[1518]: time="2026-01-23T00:12:00.640736350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 00:12:00.996799 containerd[1518]: time="2026-01-23T00:12:00.996586675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:01.000703 containerd[1518]: time="2026-01-23T00:12:01.000626720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 00:12:01.001494 containerd[1518]: time="2026-01-23T00:12:01.000758322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:12:01.001561 kubelet[2765]: E0123 00:12:01.000931 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:12:01.001561 kubelet[2765]: E0123 00:12:01.000983 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:12:01.001561 kubelet[2765]: E0123 00:12:01.001065 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6fbdd5b7-6st7k_calico-system(c35d1acc-4390-4333-8520-ad8b62c4e6ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:01.001671 kubelet[2765]: E0123 00:12:01.001111 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:12:01.282473 kubelet[2765]: E0123 00:12:01.281835 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:12:05.276728 kubelet[2765]: E0123 00:12:05.276531 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:12:13.280421 kubelet[2765]: E0123 00:12:13.280005 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"
Jan 23 00:12:13.286630 containerd[1518]: time="2026-01-23T00:12:13.283235726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:12:13.632790 containerd[1518]: time="2026-01-23T00:12:13.632280230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:13.634981 containerd[1518]: time="2026-01-23T00:12:13.634507826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:12:13.634981 containerd[1518]: time="2026-01-23T00:12:13.634620588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:12:13.635749 kubelet[2765]: E0123 00:12:13.635429 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:12:13.635749 kubelet[2765]: E0123 00:12:13.635494 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:12:13.635970 kubelet[2765]: E0123 00:12:13.635682 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-prfmf_calico-apiserver(04dc4181-cb72-4a34-bab6-405893931b00): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:13.636177 kubelet[2765]: E0123 00:12:13.636120 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-prfmf" podUID="04dc4181-cb72-4a34-bab6-405893931b00"
Jan 23 00:12:14.275771 containerd[1518]: time="2026-01-23T00:12:14.275715365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:12:14.621670 containerd[1518]: time="2026-01-23T00:12:14.621427373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:14.624896 containerd[1518]: time="2026-01-23T00:12:14.624777789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:12:14.624896 containerd[1518]: time="2026-01-23T00:12:14.624843790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:12:14.625269 kubelet[2765]: E0123 00:12:14.625203 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:12:14.625269 kubelet[2765]: E0123 00:12:14.625260 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:12:14.625620 kubelet[2765]: E0123 00:12:14.625338 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-ccfb6488b-mxfqt_calico-apiserver(4412ff60-1893-4e70-a14c-c509d31ae479): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:14.625620 kubelet[2765]: E0123 00:12:14.625393 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-ccfb6488b-mxfqt" podUID="4412ff60-1893-4e70-a14c-c509d31ae479"
Jan 23 00:12:15.280011 kubelet[2765]: E0123 00:12:15.279656 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6fbdd5b7-6st7k" podUID="c35d1acc-4390-4333-8520-ad8b62c4e6ab"
Jan 23 00:12:15.283438 containerd[1518]: time="2026-01-23T00:12:15.282626624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:12:15.628474 containerd[1518]: time="2026-01-23T00:12:15.628254267Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:15.629939 containerd[1518]: time="2026-01-23T00:12:15.629802253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:12:15.629939 containerd[1518]: time="2026-01-23T00:12:15.629899735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:12:15.630288 kubelet[2765]: E0123 00:12:15.630233 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:12:15.630637 kubelet[2765]: E0123 00:12:15.630305 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:12:15.630637 kubelet[2765]: E0123 00:12:15.630443 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:15.632819 containerd[1518]: time="2026-01-23T00:12:15.632022011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 00:12:16.023254 containerd[1518]: time="2026-01-23T00:12:16.023111950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:16.026054 containerd[1518]: time="2026-01-23T00:12:16.025972159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 00:12:16.026313 containerd[1518]: time="2026-01-23T00:12:16.026042000Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 00:12:16.026907 kubelet[2765]: E0123 00:12:16.026574 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:12:16.026907 kubelet[2765]: E0123 00:12:16.026637 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:12:16.026907 kubelet[2765]: E0123 00:12:16.026739 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fd26x_calico-system(40189ccf-4a54-4a06-a382-10a9d6df2d28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:16.027223 kubelet[2765]: E0123 00:12:16.026793 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd26x" podUID="40189ccf-4a54-4a06-a382-10a9d6df2d28"
Jan 23 00:12:16.276145 containerd[1518]: time="2026-01-23T00:12:16.275779505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 00:12:16.614182 containerd[1518]: time="2026-01-23T00:12:16.613940734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:12:16.616001 containerd[1518]: time="2026-01-23T00:12:16.615904888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 00:12:16.616360 containerd[1518]: time="2026-01-23T00:12:16.615952849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:12:16.616938 kubelet[2765]: E0123 00:12:16.616653 2765 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:12:16.616938 kubelet[2765]: E0123 00:12:16.616744 2765 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:12:16.616938 kubelet[2765]: E0123 00:12:16.616842 2765 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-vc8hr_calico-system(e04e4457-45b8-4cc6-bb9d-cbd9eec7c520): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:12:16.616938 kubelet[2765]: E0123 00:12:16.616875 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-vc8hr" podUID="e04e4457-45b8-4cc6-bb9d-cbd9eec7c520"
Jan 23 00:12:20.291862 systemd[1]: cri-containerd-ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c.scope: Deactivated successfully.
Jan 23 00:12:20.292297 systemd[1]: cri-containerd-ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c.scope: Consumed 50.464s CPU time, 106.6M memory peak.
Jan 23 00:12:20.300351 containerd[1518]: time="2026-01-23T00:12:20.300284678Z" level=info msg="received container exit event container_id:\"ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c\" id:\"ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c\" pid:3088 exit_status:1 exited_at:{seconds:1769127140 nanos:299770548}" Jan 23 00:12:20.328516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c-rootfs.mount: Deactivated successfully. Jan 23 00:12:20.398835 systemd[1]: cri-containerd-ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8.scope: Deactivated successfully. Jan 23 00:12:20.400178 systemd[1]: cri-containerd-ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8.scope: Consumed 5.247s CPU time, 65.9M memory peak, 2.8M read from disk. Jan 23 00:12:20.402665 containerd[1518]: time="2026-01-23T00:12:20.402615692Z" level=info msg="received container exit event container_id:\"ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8\" id:\"ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8\" pid:2597 exit_status:1 exited_at:{seconds:1769127140 nanos:400665856}" Jan 23 00:12:20.433222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8-rootfs.mount: Deactivated successfully. 
Jan 23 00:12:20.626531 kubelet[2765]: E0123 00:12:20.625948 2765 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51794->10.0.0.2:2379: read: connection timed out" Jan 23 00:12:21.235499 kubelet[2765]: I0123 00:12:21.234804 2765 scope.go:117] "RemoveContainer" containerID="ac16906ef5252cb87acabdc4e5ab8d82d6837eaf016edfb4adca38db28c06ad8" Jan 23 00:12:21.235499 kubelet[2765]: I0123 00:12:21.235281 2765 scope.go:117] "RemoveContainer" containerID="ba6589a2b5f0cf356e41e371dd086f3797e63a85be13c854c6e6898fe398423c" Jan 23 00:12:21.243517 containerd[1518]: time="2026-01-23T00:12:21.242415152Z" level=info msg="CreateContainer within sandbox \"dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 00:12:21.245501 containerd[1518]: time="2026-01-23T00:12:21.244619314Z" level=info msg="CreateContainer within sandbox \"9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 00:12:21.264560 containerd[1518]: time="2026-01-23T00:12:21.259869681Z" level=info msg="Container 36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:12:21.272932 containerd[1518]: time="2026-01-23T00:12:21.272877486Z" level=info msg="Container d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:12:21.281056 containerd[1518]: time="2026-01-23T00:12:21.280996159Z" level=info msg="CreateContainer within sandbox \"dcabdc23984b492f09526e80736cc17b9501380c6d5d6ae23420ccc5088d4306\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea\"" Jan 23 00:12:21.284744 containerd[1518]: time="2026-01-23T00:12:21.283783011Z" level=info 
msg="StartContainer for \"36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea\"" Jan 23 00:12:21.286033 containerd[1518]: time="2026-01-23T00:12:21.285996773Z" level=info msg="connecting to shim 36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea" address="unix:///run/containerd/s/226f1f02f1586669acc6595dee591b5371a17bc0a3855a3a29c79b4c83d451c6" protocol=ttrpc version=3 Jan 23 00:12:21.298366 containerd[1518]: time="2026-01-23T00:12:21.297601631Z" level=info msg="CreateContainer within sandbox \"9fec135aa0ffd66d8627e073cc8164fe1366aa88c57cbab99c4dfbdd22582e2b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9\"" Jan 23 00:12:21.303703 containerd[1518]: time="2026-01-23T00:12:21.303609304Z" level=info msg="StartContainer for \"d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9\"" Jan 23 00:12:21.312557 containerd[1518]: time="2026-01-23T00:12:21.312399870Z" level=info msg="connecting to shim d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9" address="unix:///run/containerd/s/3b9326bf848d18dfcdf6336ea762171ec43363f9c369f60b88fbc63e6d16647d" protocol=ttrpc version=3 Jan 23 00:12:21.329488 systemd[1]: Started cri-containerd-36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea.scope - libcontainer container 36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea. Jan 23 00:12:21.349957 systemd[1]: Started cri-containerd-d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9.scope - libcontainer container d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9. 
Jan 23 00:12:21.407603 containerd[1518]: time="2026-01-23T00:12:21.407537060Z" level=info msg="StartContainer for \"36507fe1bb7b3dd1f221e95b9a32daff136f62067caca82218755ab242c55cea\" returns successfully" Jan 23 00:12:21.430946 containerd[1518]: time="2026-01-23T00:12:21.430882339Z" level=info msg="StartContainer for \"d6bdff1601bb46dbfdf5ea639e9316ecd28a1b51a48ea560f22ef152b55cbab9\" returns successfully" Jan 23 00:12:24.276920 kubelet[2765]: E0123 00:12:24.276773 2765 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5df54fc4cc-6ldlj" podUID="535d2a42-9500-4db2-9b67-fdfed8e04eb2"