Dec 16 12:27:08.802193 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 16 12:27:08.802218 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:27:08.802229 kernel: KASLR enabled
Dec 16 12:27:08.802235 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Dec 16 12:27:08.802241 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Dec 16 12:27:08.802246 kernel: random: crng init done
Dec 16 12:27:08.802254 kernel: secureboot: Secure boot disabled
Dec 16 12:27:08.802260 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:27:08.802266 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Dec 16 12:27:08.802272 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:27:08.802280 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802286 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802292 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802299 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802306 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802314 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802321 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802327 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802333 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:27:08.802340 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 12:27:08.802346 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 16 12:27:08.802353 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:27:08.802359 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 16 12:27:08.802365 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Dec 16 12:27:08.802375 kernel: Zone ranges:
Dec 16 12:27:08.802382 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 16 12:27:08.802391 kernel: DMA32 empty
Dec 16 12:27:08.802410 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 16 12:27:08.802416 kernel: Device empty
Dec 16 12:27:08.802422 kernel: Movable zone start for each node
Dec 16 12:27:08.802429 kernel: Early memory node ranges
Dec 16 12:27:08.802435 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Dec 16 12:27:08.802441 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Dec 16 12:27:08.802448 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Dec 16 12:27:08.802454 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Dec 16 12:27:08.802460 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Dec 16 12:27:08.802467 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Dec 16 12:27:08.802473 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Dec 16 12:27:08.802482 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Dec 16 12:27:08.802488 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Dec 16 12:27:08.802498 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 16 12:27:08.802505 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 16 12:27:08.802512 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Dec 16 12:27:08.802520 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:27:08.802527 kernel: psci: PSCIv1.1 detected in firmware.
Dec 16 12:27:08.802534 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:27:08.802540 kernel: psci: Trusted OS migration not required
Dec 16 12:27:08.802547 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:27:08.802554 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 16 12:27:08.802560 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:27:08.802567 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:27:08.802574 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:27:08.802581 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:27:08.802588 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:27:08.802595 kernel: CPU features: detected: Spectre-v4
Dec 16 12:27:08.802602 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:27:08.802609 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:27:08.802616 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:27:08.802622 kernel: CPU features: detected: ARM erratum 1418040
Dec 16 12:27:08.802629 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:27:08.802636 kernel: alternatives: applying boot alternatives
Dec 16 12:27:08.802644 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:27:08.802651 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:27:08.802658 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:27:08.802664 kernel: Fallback order for Node 0: 0
Dec 16 12:27:08.802672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Dec 16 12:27:08.802679 kernel: Policy zone: Normal
Dec 16 12:27:08.802686 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:27:08.802692 kernel: software IO TLB: area num 2.
Dec 16 12:27:08.802699 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Dec 16 12:27:08.802706 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:27:08.802713 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:27:08.802720 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:27:08.802727 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:27:08.802734 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:27:08.802741 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:27:08.802748 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:27:08.802757 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:27:08.802764 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:27:08.802772 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:27:08.802778 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:27:08.802785 kernel: GICv3: 256 SPIs implemented
Dec 16 12:27:08.802792 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:27:08.802798 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:27:08.802805 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 16 12:27:08.802812 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:27:08.802819 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 16 12:27:08.802826 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 16 12:27:08.802834 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:27:08.802841 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:27:08.802848 kernel: GICv3: using LPI property table @0x0000000100120000
Dec 16 12:27:08.802855 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Dec 16 12:27:08.802862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:27:08.802868 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:27:08.802875 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 16 12:27:08.802882 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 16 12:27:08.802889 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 16 12:27:08.802896 kernel: Console: colour dummy device 80x25
Dec 16 12:27:08.802903 kernel: ACPI: Core revision 20240827
Dec 16 12:27:08.802912 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 16 12:27:08.802919 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:27:08.802926 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:27:08.802934 kernel: landlock: Up and running.
Dec 16 12:27:08.802940 kernel: SELinux: Initializing.
Dec 16 12:27:08.802948 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:27:08.802954 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:27:08.802962 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:27:08.802969 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:27:08.802977 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:27:08.802984 kernel: Remapping and enabling EFI services.
Dec 16 12:27:08.802991 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:27:08.802998 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:27:08.803005 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 16 12:27:08.803012 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Dec 16 12:27:08.803019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 16 12:27:08.803026 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 16 12:27:08.803033 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:27:08.803040 kernel: SMP: Total of 2 processors activated.
Dec 16 12:27:08.803053 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:27:08.803060 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:27:08.803069 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:27:08.803076 kernel: CPU features: detected: Common not Private translations
Dec 16 12:27:08.803084 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:27:08.803091 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 16 12:27:08.803098 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:27:08.803107 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:27:08.804221 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:27:08.804231 kernel: CPU features: detected: RAS Extension Support
Dec 16 12:27:08.804239 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:27:08.804247 kernel: alternatives: applying system-wide alternatives
Dec 16 12:27:08.804254 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 16 12:27:08.804263 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Dec 16 12:27:08.804271 kernel: devtmpfs: initialized
Dec 16 12:27:08.804279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:27:08.804292 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:27:08.804300 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:27:08.804308 kernel: 0 pages in range for non-PLT usage
Dec 16 12:27:08.804315 kernel: 508400 pages in range for PLT usage
Dec 16 12:27:08.804323 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:27:08.804330 kernel: SMBIOS 3.0.0 present.
Dec 16 12:27:08.804338 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 16 12:27:08.804345 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:27:08.804353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:27:08.804362 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:27:08.804369 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:27:08.804377 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:27:08.804384 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:27:08.804392 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Dec 16 12:27:08.804446 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:27:08.804454 kernel: cpuidle: using governor menu
Dec 16 12:27:08.804461 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:27:08.804469 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:27:08.804480 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:27:08.804487 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:27:08.804546 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:27:08.804555 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:27:08.804563 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:27:08.804570 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:27:08.804578 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:27:08.804585 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:27:08.804593 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:27:08.804603 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:27:08.804611 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:27:08.804618 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:27:08.804626 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:27:08.804633 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:27:08.804641 kernel: ACPI: Interpreter enabled
Dec 16 12:27:08.804648 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:27:08.804656 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:27:08.804663 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:27:08.804672 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:27:08.804679 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:27:08.804686 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:27:08.804694 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:27:08.804865 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:27:08.804932 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:27:08.805032 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:27:08.805097 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 16 12:27:08.805199 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 16 12:27:08.805210 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 16 12:27:08.805218 kernel: PCI host bridge to bus 0000:00
Dec 16 12:27:08.805295 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 16 12:27:08.805355 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:27:08.805422 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 16 12:27:08.805477 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:27:08.805561 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:27:08.805636 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Dec 16 12:27:08.805700 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Dec 16 12:27:08.805762 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 16 12:27:08.805883 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.805959 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Dec 16 12:27:08.806031 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 12:27:08.806096 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Dec 16 12:27:08.808317 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Dec 16 12:27:08.808432 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.808499 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Dec 16 12:27:08.808562 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 12:27:08.808625 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Dec 16 12:27:08.808702 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.808778 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Dec 16 12:27:08.808842 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 12:27:08.808903 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Dec 16 12:27:08.808966 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Dec 16 12:27:08.809036 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.809102 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Dec 16 12:27:08.810344 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 12:27:08.810441 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Dec 16 12:27:08.810508 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Dec 16 12:27:08.810580 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.810644 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Dec 16 12:27:08.810705 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 12:27:08.810773 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 16 12:27:08.810843 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Dec 16 12:27:08.810988 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.811101 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Dec 16 12:27:08.813300 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 12:27:08.813378 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Dec 16 12:27:08.813457 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Dec 16 12:27:08.813532 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.813641 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Dec 16 12:27:08.813714 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 12:27:08.813784 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Dec 16 12:27:08.813852 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Dec 16 12:27:08.813923 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.813985 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Dec 16 12:27:08.814050 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 12:27:08.814140 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Dec 16 12:27:08.814283 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Dec 16 12:27:08.814352 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Dec 16 12:27:08.814426 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 12:27:08.814489 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Dec 16 12:27:08.814564 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Dec 16 12:27:08.814631 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Dec 16 12:27:08.814705 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 12:27:08.814770 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Dec 16 12:27:08.814834 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 16 12:27:08.814897 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 16 12:27:08.814967 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Dec 16 12:27:08.815036 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Dec 16 12:27:08.815145 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Dec 16 12:27:08.815216 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Dec 16 12:27:08.815325 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 16 12:27:08.815447 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 12:27:08.815535 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 16 12:27:08.815622 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Dec 16 12:27:08.815694 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Dec 16 12:27:08.815758 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 16 12:27:08.815830 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Dec 16 12:27:08.815895 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Dec 16 12:27:08.815959 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 16 12:27:08.816029 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Dec 16 12:27:08.816094 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Dec 16 12:27:08.816200 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 16 12:27:08.816265 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Dec 16 12:27:08.816330 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 16 12:27:08.816392 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 16 12:27:08.816470 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 16 12:27:08.816540 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 16 12:27:08.816605 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 16 12:27:08.816672 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 16 12:27:08.816737 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 16 12:27:08.816801 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 16 12:27:08.816869 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 16 12:27:08.816935 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 16 12:27:08.816997 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 16 12:27:08.817058 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 16 12:27:08.817969 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 16 12:27:08.818052 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 16 12:27:08.819125 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Dec 16 12:27:08.819230 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 16 12:27:08.819857 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 16 12:27:08.819932 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 16 12:27:08.820349 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 16 12:27:08.820457 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 16 12:27:08.820521 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 16 12:27:08.820596 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 16 12:27:08.820658 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 16 12:27:08.820719 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 16 12:27:08.820796 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 16 12:27:08.820904 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 16 12:27:08.820980 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 16 12:27:08.821045 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Dec 16 12:27:08.821140 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Dec 16 12:27:08.821211 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Dec 16 12:27:08.821273 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Dec 16 12:27:08.821338 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Dec 16 12:27:08.821448 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Dec 16 12:27:08.821534 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Dec 16 12:27:08.821598 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Dec 16 12:27:08.821671 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Dec 16 12:27:08.821736 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Dec 16 12:27:08.821799 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Dec 16 12:27:08.821860 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Dec 16 12:27:08.821923 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Dec 16 12:27:08.821988 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Dec 16 12:27:08.822052 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Dec 16 12:27:08.823033 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Dec 16 12:27:08.823198 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Dec 16 12:27:08.823321 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Dec 16 12:27:08.823424 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Dec 16 12:27:08.823500 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Dec 16 12:27:08.823565 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Dec 16 12:27:08.823644 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Dec 16 12:27:08.823711 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Dec 16 12:27:08.823773 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Dec 16 12:27:08.823837 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Dec 16 12:27:08.823902 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Dec 16 12:27:08.823966 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Dec 16 12:27:08.824027 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Dec 16 12:27:08.824090 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Dec 16 12:27:08.824182 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Dec 16 12:27:08.824248 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Dec 16 12:27:08.824309 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Dec 16 12:27:08.824372 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Dec 16 12:27:08.824451 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Dec 16 12:27:08.824516 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Dec 16 12:27:08.824577 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Dec 16 12:27:08.824646 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Dec 16 12:27:08.824729 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Dec 16 12:27:08.824827 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Dec 16 12:27:08.824899 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Dec 16 12:27:08.824964 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 16 12:27:08.825031 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Dec 16 12:27:08.825093 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 16 12:27:08.825375 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 16 12:27:08.825480 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 16 12:27:08.825548 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 16 12:27:08.825618 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Dec 16 12:27:08.825682 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 16 12:27:08.825743 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 16 12:27:08.825809 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 16 12:27:08.825872 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 16 12:27:08.825942 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Dec 16 12:27:08.826067 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Dec 16 12:27:08.826161 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 16 12:27:08.826227 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 16 12:27:08.826294 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 16 12:27:08.826355 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 16 12:27:08.826464 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Dec 16 12:27:08.826532 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 16 12:27:08.826593 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 16 12:27:08.826687 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 16 12:27:08.826752 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 16 12:27:08.826827 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Dec 16 12:27:08.826892 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Dec 16 12:27:08.826956 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 16 12:27:08.827017 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 16 12:27:08.827079 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 16 12:27:08.827158 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 16 12:27:08.827229 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Dec 16 12:27:08.827296 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Dec 16 12:27:08.827357 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 16 12:27:08.827454 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 16 12:27:08.827525 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 16 12:27:08.827586 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 16 12:27:08.827654 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Dec 16 12:27:08.827719 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Dec 16 12:27:08.827786 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Dec 16 12:27:08.827850 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 16 12:27:08.827914 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 16 12:27:08.827976 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 16 12:27:08.828063 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 16 12:27:08.828139 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 16 12:27:08.828202 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 16 12:27:08.828263 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 16 12:27:08.828326 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 16 12:27:08.828389 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 16 12:27:08.828468 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 16
12:27:08.828531 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Dec 16 12:27:08.828596 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 16 12:27:08.828661 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 16 12:27:08.828716 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 16 12:27:08.828771 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 16 12:27:08.828843 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 16 12:27:08.828935 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 16 12:27:08.829006 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 16 12:27:08.829073 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Dec 16 12:27:08.829177 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 16 12:27:08.829240 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 16 12:27:08.829305 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Dec 16 12:27:08.829365 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 16 12:27:08.829461 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 16 12:27:08.829565 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 16 12:27:08.829626 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 16 12:27:08.829683 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 16 12:27:08.829748 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Dec 16 12:27:08.829805 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 16 12:27:08.829861 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 16 12:27:08.829933 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Dec 16 12:27:08.829990 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 16 12:27:08.830047 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 16 12:27:08.830123 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Dec 16 12:27:08.830185 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 16 12:27:08.830246 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 16 12:27:08.830310 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Dec 16 12:27:08.830370 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 16 12:27:08.830437 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 16 12:27:08.830502 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Dec 16 12:27:08.830559 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 16 12:27:08.830615 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Dec 16 12:27:08.830625 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 16 12:27:08.830634 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 16 12:27:08.830644 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 16 12:27:08.830652 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 16 12:27:08.830660 kernel: iommu: Default domain type: Translated Dec 16 12:27:08.830668 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 16 12:27:08.830675 kernel: efivars: Registered efivars operations Dec 16 12:27:08.830683 kernel: vgaarb: loaded Dec 16 12:27:08.830692 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 16 12:27:08.830700 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 12:27:08.830708 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 12:27:08.830717 kernel: pnp: PnP ACPI init Dec 16 12:27:08.830797 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 16 12:27:08.830810 kernel: pnp: PnP ACPI: found 1 devices Dec 16 12:27:08.830818 kernel: NET: Registered PF_INET 
protocol family Dec 16 12:27:08.830826 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 16 12:27:08.830834 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 16 12:27:08.830842 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 12:27:08.830850 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 12:27:08.830886 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 16 12:27:08.830896 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 16 12:27:08.830904 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:27:08.830912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 16 12:27:08.830920 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 12:27:08.831006 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 16 12:27:08.831017 kernel: PCI: CLS 0 bytes, default 64 Dec 16 12:27:08.831026 kernel: kvm [1]: HYP mode not available Dec 16 12:27:08.831033 kernel: Initialise system trusted keyrings Dec 16 12:27:08.831044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 16 12:27:08.831053 kernel: Key type asymmetric registered Dec 16 12:27:08.831060 kernel: Asymmetric key parser 'x509' registered Dec 16 12:27:08.831068 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 16 12:27:08.831076 kernel: io scheduler mq-deadline registered Dec 16 12:27:08.831084 kernel: io scheduler kyber registered Dec 16 12:27:08.831092 kernel: io scheduler bfq registered Dec 16 12:27:08.831100 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 16 12:27:08.831208 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Dec 16 12:27:08.831280 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Dec 16 12:27:08.831343 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.831419 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Dec 16 12:27:08.831486 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Dec 16 12:27:08.831549 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.831613 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Dec 16 12:27:08.831679 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Dec 16 12:27:08.831740 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.831809 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 16 12:27:08.831873 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 16 12:27:08.831935 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.832002 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 16 12:27:08.832065 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 16 12:27:08.832140 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.832204 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 16 12:27:08.832266 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 16 12:27:08.832377 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.832499 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 16 12:27:08.832610 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 16 12:27:08.832677 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.832742 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 16 12:27:08.832803 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 16 12:27:08.832866 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.832881 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 16 12:27:08.832945 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Dec 16 12:27:08.833018 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 16 12:27:08.833081 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 12:27:08.833091 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 16 12:27:08.833100 kernel: ACPI: button: Power Button [PWRB] Dec 16 12:27:08.833108 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 12:27:08.833220 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 16 12:27:08.833297 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 16 12:27:08.833312 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 12:27:08.833320 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 12:27:08.833384 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 16 12:27:08.833404 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 16 12:27:08.833413 kernel: thunder_xcv, ver 1.0 Dec 16 12:27:08.833421 kernel: thunder_bgx, ver 1.0 Dec 16 12:27:08.833431 kernel: nicpf, ver 1.0 Dec 16 12:27:08.833439 kernel: nicvf, ver 1.0 Dec 16 12:27:08.833519 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 16 12:27:08.833581 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:27:08 UTC (1765888028) Dec 16 12:27:08.833592 kernel: hid: raw HID events 
driver (C) Jiri Kosina Dec 16 12:27:08.833600 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 16 12:27:08.833608 kernel: watchdog: NMI not fully supported Dec 16 12:27:08.833615 kernel: watchdog: Hard watchdog permanently disabled Dec 16 12:27:08.833623 kernel: NET: Registered PF_INET6 protocol family Dec 16 12:27:08.833631 kernel: Segment Routing with IPv6 Dec 16 12:27:08.833639 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 12:27:08.833648 kernel: NET: Registered PF_PACKET protocol family Dec 16 12:27:08.833656 kernel: Key type dns_resolver registered Dec 16 12:27:08.833664 kernel: registered taskstats version 1 Dec 16 12:27:08.833671 kernel: Loading compiled-in X.509 certificates Dec 16 12:27:08.833679 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a' Dec 16 12:27:08.833687 kernel: Demotion targets for Node 0: null Dec 16 12:27:08.833695 kernel: Key type .fscrypt registered Dec 16 12:27:08.833702 kernel: Key type fscrypt-provisioning registered Dec 16 12:27:08.833710 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 12:27:08.833719 kernel: ima: Allocated hash algorithm: sha1 Dec 16 12:27:08.833727 kernel: ima: No architecture policies found Dec 16 12:27:08.833735 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 16 12:27:08.833744 kernel: clk: Disabling unused clocks Dec 16 12:27:08.833751 kernel: PM: genpd: Disabling unused power domains Dec 16 12:27:08.833759 kernel: Warning: unable to open an initial console. Dec 16 12:27:08.833792 kernel: Freeing unused kernel memory: 39552K Dec 16 12:27:08.833802 kernel: Run /init as init process Dec 16 12:27:08.833809 kernel: with arguments: Dec 16 12:27:08.833820 kernel: /init Dec 16 12:27:08.833828 kernel: with environment: Dec 16 12:27:08.833835 kernel: HOME=/ Dec 16 12:27:08.833843 kernel: TERM=linux Dec 16 12:27:08.833852 systemd[1]: Successfully made /usr/ read-only. 
Dec 16 12:27:08.833863 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:27:08.833872 systemd[1]: Detected virtualization kvm. Dec 16 12:27:08.833881 systemd[1]: Detected architecture arm64. Dec 16 12:27:08.833889 systemd[1]: Running in initrd. Dec 16 12:27:08.833897 systemd[1]: No hostname configured, using default hostname. Dec 16 12:27:08.833906 systemd[1]: Hostname set to . Dec 16 12:27:08.833914 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:27:08.833922 systemd[1]: Queued start job for default target initrd.target. Dec 16 12:27:08.833931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:27:08.833939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:27:08.833948 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:27:08.833958 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:27:08.833967 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:27:08.833976 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:27:08.833986 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 12:27:08.833994 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 12:27:08.834003 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 16 12:27:08.834013 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:27:08.834021 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:27:08.834029 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:27:08.834037 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:27:08.834046 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:27:08.834054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:27:08.834062 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:27:08.834071 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:27:08.834079 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 12:27:08.834089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:27:08.834097 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:27:08.834105 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:27:08.834147 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:27:08.834156 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:27:08.834164 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:27:08.834173 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:27:08.834184 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:27:08.834193 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:27:08.834202 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:27:08.834210 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Dec 16 12:27:08.834218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:27:08.834227 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:27:08.834236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:27:08.834246 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:27:08.834283 systemd-journald[245]: Collecting audit messages is disabled. Dec 16 12:27:08.834305 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:27:08.834316 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:27:08.834324 kernel: Bridge firewalling registered Dec 16 12:27:08.834333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:27:08.834341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:27:08.834350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:27:08.834360 systemd-journald[245]: Journal started Dec 16 12:27:08.834380 systemd-journald[245]: Runtime Journal (/run/log/journal/210681505651490c86240f17288a0bae) is 8M, max 76.5M, 68.5M free. Dec 16 12:27:08.806936 systemd-modules-load[247]: Inserted module 'overlay' Dec 16 12:27:08.827417 systemd-modules-load[247]: Inserted module 'br_netfilter' Dec 16 12:27:08.839131 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:27:08.841343 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:27:08.847449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:27:08.859815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 16 12:27:08.864452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:27:08.868130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:27:08.876599 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:27:08.881338 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:27:08.883471 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:27:08.884152 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:27:08.889340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:27:08.891531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:27:08.913268 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 16 12:27:08.939425 systemd-resolved[286]: Positive Trust Anchors: Dec 16 12:27:08.939444 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:27:08.939481 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:27:08.945121 systemd-resolved[286]: Defaulting to hostname 'linux'. Dec 16 12:27:08.946586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:27:08.947389 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:27:09.008156 kernel: SCSI subsystem initialized Dec 16 12:27:09.013150 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:27:09.021159 kernel: iscsi: registered transport (tcp) Dec 16 12:27:09.034290 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:27:09.034362 kernel: QLogic iSCSI HBA Driver Dec 16 12:27:09.052218 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:27:09.077865 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:27:09.079631 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:27:09.137981 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:27:09.140938 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Dec 16 12:27:09.208175 kernel: raid6: neonx8 gen() 15553 MB/s Dec 16 12:27:09.225183 kernel: raid6: neonx4 gen() 15667 MB/s Dec 16 12:27:09.242224 kernel: raid6: neonx2 gen() 13078 MB/s Dec 16 12:27:09.259178 kernel: raid6: neonx1 gen() 10313 MB/s Dec 16 12:27:09.276177 kernel: raid6: int64x8 gen() 6827 MB/s Dec 16 12:27:09.293181 kernel: raid6: int64x4 gen() 7201 MB/s Dec 16 12:27:09.310195 kernel: raid6: int64x2 gen() 6002 MB/s Dec 16 12:27:09.327210 kernel: raid6: int64x1 gen() 4979 MB/s Dec 16 12:27:09.327311 kernel: raid6: using algorithm neonx4 gen() 15667 MB/s Dec 16 12:27:09.344166 kernel: raid6: .... xor() 12208 MB/s, rmw enabled Dec 16 12:27:09.344244 kernel: raid6: using neon recovery algorithm Dec 16 12:27:09.349498 kernel: xor: measuring software checksum speed Dec 16 12:27:09.349574 kernel: 8regs : 21670 MB/sec Dec 16 12:27:09.349592 kernel: 32regs : 21687 MB/sec Dec 16 12:27:09.349609 kernel: arm64_neon : 28003 MB/sec Dec 16 12:27:09.350235 kernel: xor: using function: arm64_neon (28003 MB/sec) Dec 16 12:27:09.405149 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:27:09.415694 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:27:09.418585 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:27:09.452173 systemd-udevd[494]: Using default interface naming scheme 'v255'. Dec 16 12:27:09.456957 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:27:09.462234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:27:09.487861 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Dec 16 12:27:09.513672 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:27:09.516935 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:27:09.578562 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 12:27:09.583265 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:27:09.674141 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Dec 16 12:27:09.679442 kernel: scsi host0: Virtio SCSI HBA Dec 16 12:27:09.692227 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 12:27:09.693339 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 16 12:27:09.716491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:27:09.716616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:27:09.718864 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:27:09.723129 kernel: ACPI: bus type USB registered Dec 16 12:27:09.723167 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 16 12:27:09.723305 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 16 12:27:09.724312 kernel: usbcore: registered new interface driver usbfs Dec 16 12:27:09.724353 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 16 12:27:09.724595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:27:09.726184 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 16 12:27:09.728286 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 16 12:27:09.728440 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 12:27:09.729351 kernel: usbcore: registered new interface driver hub Dec 16 12:27:09.730790 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 16 12:27:09.730939 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:27:09.733586 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 16 12:27:09.733752 kernel: usbcore: registered new device driver usb Dec 16 12:27:09.736380 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Dec 16 12:27:09.736427 kernel: GPT:17805311 != 80003071 Dec 16 12:27:09.736438 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:27:09.736448 kernel: GPT:17805311 != 80003071 Dec 16 12:27:09.737131 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:27:09.738998 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:27:09.739044 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 16 12:27:09.761340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:27:09.770683 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 12:27:09.771191 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 16 12:27:09.771316 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 12:27:09.775874 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 12:27:09.776057 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 16 12:27:09.776179 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 16 12:27:09.777418 kernel: hub 1-0:1.0: USB hub found Dec 16 12:27:09.777679 kernel: hub 1-0:1.0: 4 ports detected Dec 16 12:27:09.777859 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 16 12:27:09.779173 kernel: hub 2-0:1.0: USB hub found Dec 16 12:27:09.788547 kernel: hub 2-0:1.0: 4 ports detected Dec 16 12:27:09.823286 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 16 12:27:09.834512 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 16 12:27:09.858146 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 12:27:09.865854 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 16 12:27:09.866808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Dec 16 12:27:09.874500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:27:09.882153 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:27:09.883355 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:27:09.884084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:27:09.887056 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:27:09.890675 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:27:09.895219 disk-uuid[604]: Primary Header is updated. Dec 16 12:27:09.895219 disk-uuid[604]: Secondary Entries is updated. Dec 16 12:27:09.895219 disk-uuid[604]: Secondary Header is updated. Dec 16 12:27:09.908525 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:27:09.918711 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:27:10.013181 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 12:27:10.145342 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 16 12:27:10.145437 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 16 12:27:10.146344 kernel: usbcore: registered new interface driver usbhid Dec 16 12:27:10.146372 kernel: usbhid: USB HID core driver Dec 16 12:27:10.250250 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 16 12:27:10.379203 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 16 12:27:10.432727 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 16 12:27:10.929132 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 
12:27:10.930156 disk-uuid[605]: The operation has completed successfully. Dec 16 12:27:10.984809 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:27:10.985755 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:27:11.009288 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:27:11.037141 sh[629]: Success Dec 16 12:27:11.053140 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:27:11.054312 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:27:11.054345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:27:11.067164 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:27:11.124342 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:27:11.126020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:27:11.139159 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 12:27:11.148160 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (642) Dec 16 12:27:11.149688 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:27:11.149744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:27:11.157450 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 12:27:11.157521 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:27:11.157548 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:27:11.159341 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:27:11.161147 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Dec 16 12:27:11.162861 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:27:11.165002 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:27:11.166902 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:27:11.195149 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (671)
Dec 16 12:27:11.196560 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:27:11.196597 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:27:11.204175 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 12:27:11.204232 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:27:11.204243 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:27:11.209130 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:27:11.211000 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:27:11.214008 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:27:11.298542 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:27:11.305296 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:27:11.342779 systemd-networkd[813]: lo: Link UP
Dec 16 12:27:11.343551 systemd-networkd[813]: lo: Gained carrier
Dec 16 12:27:11.346060 systemd-networkd[813]: Enumeration completed
Dec 16 12:27:11.346737 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:27:11.348011 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:27:11.348014 systemd-networkd[813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:27:11.348502 systemd[1]: Reached target network.target - Network.
Dec 16 12:27:11.349449 systemd-networkd[813]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:27:11.349452 systemd-networkd[813]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:27:11.350045 systemd-networkd[813]: eth0: Link UP
Dec 16 12:27:11.354440 systemd-networkd[813]: eth1: Link UP
Dec 16 12:27:11.354595 systemd-networkd[813]: eth1: Gained carrier
Dec 16 12:27:11.354610 systemd-networkd[813]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:27:11.358340 systemd-networkd[813]: eth0: Gained carrier
Dec 16 12:27:11.358443 systemd-networkd[813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:27:11.368753 ignition[724]: Ignition 2.22.0
Dec 16 12:27:11.368771 ignition[724]: Stage: fetch-offline
Dec 16 12:27:11.368800 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:11.373508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:27:11.368808 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:11.368889 ignition[724]: parsed url from cmdline: ""
Dec 16 12:27:11.368892 ignition[724]: no config URL provided
Dec 16 12:27:11.368897 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:27:11.368904 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:27:11.377383 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:27:11.368909 ignition[724]: failed to fetch config: resource requires networking
Dec 16 12:27:11.369056 ignition[724]: Ignition finished successfully
Dec 16 12:27:11.406311 systemd-networkd[813]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Dec 16 12:27:11.410310 systemd-networkd[813]: eth0: DHCPv4 address 88.99.82.111/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 16 12:27:11.426707 ignition[822]: Ignition 2.22.0
Dec 16 12:27:11.426720 ignition[822]: Stage: fetch
Dec 16 12:27:11.427135 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:11.427146 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:11.427232 ignition[822]: parsed url from cmdline: ""
Dec 16 12:27:11.427236 ignition[822]: no config URL provided
Dec 16 12:27:11.427240 ignition[822]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:27:11.427246 ignition[822]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:27:11.427269 ignition[822]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 16 12:27:11.433148 ignition[822]: GET result: OK
Dec 16 12:27:11.433689 ignition[822]: parsing config with SHA512: 16182222f0bae7c33f930b9b455f955334153d52cbd0f8d4bd31f57929d2359b96624e4f7d4011fe9ad72397263c9379af7b7bf044e62fd91bd71f4cf81dc461
Dec 16 12:27:11.438338 unknown[822]: fetched base config from "system"
Dec 16 12:27:11.438721 ignition[822]: fetch: fetch complete
Dec 16 12:27:11.438347 unknown[822]: fetched base config from "system"
Dec 16 12:27:11.438726 ignition[822]: fetch: fetch passed
Dec 16 12:27:11.438352 unknown[822]: fetched user config from "hetzner"
Dec 16 12:27:11.441091 ignition[822]: Ignition finished successfully
Dec 16 12:27:11.444183 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:27:11.447502 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:27:11.482967 ignition[830]: Ignition 2.22.0
Dec 16 12:27:11.482982 ignition[830]: Stage: kargs
Dec 16 12:27:11.483137 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:11.483147 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:11.484002 ignition[830]: kargs: kargs passed
Dec 16 12:27:11.484046 ignition[830]: Ignition finished successfully
Dec 16 12:27:11.486735 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:27:11.488576 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:27:11.526825 ignition[836]: Ignition 2.22.0
Dec 16 12:27:11.527530 ignition[836]: Stage: disks
Dec 16 12:27:11.527728 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:11.527741 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:11.530200 ignition[836]: disks: disks passed
Dec 16 12:27:11.530273 ignition[836]: Ignition finished successfully
Dec 16 12:27:11.532197 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:27:11.534480 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:27:11.536022 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:27:11.536880 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:27:11.537651 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:27:11.538963 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:27:11.541036 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:27:11.567496 systemd-fsck[845]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Dec 16 12:27:11.572249 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:27:11.575832 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:27:11.661157 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:27:11.662277 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:27:11.664456 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:27:11.666708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:27:11.668912 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:27:11.676598 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 16 12:27:11.679742 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:27:11.681856 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:27:11.685795 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:27:11.688129 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (853)
Dec 16 12:27:11.689614 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:27:11.689710 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:27:11.690816 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:27:11.698154 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 12:27:11.698236 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:27:11.698252 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:27:11.701014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:27:11.754171 coreos-metadata[855]: Dec 16 12:27:11.753 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 16 12:27:11.756758 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:27:11.758261 coreos-metadata[855]: Dec 16 12:27:11.756 INFO Fetch successful
Dec 16 12:27:11.758261 coreos-metadata[855]: Dec 16 12:27:11.756 INFO wrote hostname ci-4459-2-2-0-7f64ef3ba0 to /sysroot/etc/hostname
Dec 16 12:27:11.759012 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 12:27:11.765149 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:27:11.771090 initrd-setup-root[895]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:27:11.777023 initrd-setup-root[902]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:27:11.877297 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:27:11.880060 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:27:11.881461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:27:11.906166 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:27:11.924163 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:27:11.937139 ignition[970]: INFO : Ignition 2.22.0
Dec 16 12:27:11.937139 ignition[970]: INFO : Stage: mount
Dec 16 12:27:11.937139 ignition[970]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:11.937139 ignition[970]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:11.940541 ignition[970]: INFO : mount: mount passed
Dec 16 12:27:11.941036 ignition[970]: INFO : Ignition finished successfully
Dec 16 12:27:11.943659 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:27:11.946697 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:27:12.148709 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:27:12.152526 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:27:12.182339 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (981)
Dec 16 12:27:12.185149 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:27:12.185213 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:27:12.189452 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 16 12:27:12.189530 kernel: BTRFS info (device sda6): turning on async discard
Dec 16 12:27:12.189551 kernel: BTRFS info (device sda6): enabling free space tree
Dec 16 12:27:12.192954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:27:12.231824 ignition[998]: INFO : Ignition 2.22.0
Dec 16 12:27:12.231824 ignition[998]: INFO : Stage: files
Dec 16 12:27:12.233816 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:12.233816 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:12.236761 ignition[998]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:27:12.238792 ignition[998]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:27:12.238792 ignition[998]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:27:12.241868 ignition[998]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:27:12.243321 ignition[998]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:27:12.244660 unknown[998]: wrote ssh authorized keys file for user: core
Dec 16 12:27:12.246188 ignition[998]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:27:12.247528 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:27:12.248857 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 16 12:27:12.325155 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:27:12.422004 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 16 12:27:12.422004 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:27:12.425011 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:27:12.433995 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:27:12.433995 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:27:12.433995 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:27:12.433995 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:27:12.433995 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:27:12.433995 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Dec 16 12:27:12.564812 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:27:12.731299 systemd-networkd[813]: eth0: Gained IPv6LL
Dec 16 12:27:13.179276 systemd-networkd[813]: eth1: Gained IPv6LL
Dec 16 12:27:13.220813 ignition[998]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Dec 16 12:27:13.220813 ignition[998]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:27:13.224453 ignition[998]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:27:13.226270 ignition[998]: INFO : files: files passed
Dec 16 12:27:13.226270 ignition[998]: INFO : Ignition finished successfully
Dec 16 12:27:13.228530 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:27:13.230737 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:27:13.234700 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:27:13.248058 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:27:13.248173 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:27:13.256368 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:27:13.256368 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:27:13.259267 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:27:13.260917 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:27:13.261925 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:27:13.264681 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:27:13.325195 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:27:13.325404 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:27:13.330961 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:27:13.332695 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:27:13.334318 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:27:13.335759 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:27:13.368239 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:27:13.371244 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:27:13.398451 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:27:13.399963 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:27:13.401702 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:27:13.402727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:27:13.402908 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:27:13.404951 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:27:13.406532 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:27:13.407385 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:27:13.408530 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:27:13.409714 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:27:13.410922 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:27:13.412046 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:27:13.413127 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:27:13.414296 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:27:13.415416 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:27:13.416435 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:27:13.417309 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:27:13.417525 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:27:13.418809 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:27:13.419964 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:27:13.421093 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:27:13.421683 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:27:13.423055 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:27:13.423247 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:27:13.424847 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:27:13.425014 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:27:13.426246 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:27:13.426405 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:27:13.427491 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 16 12:27:13.427633 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 16 12:27:13.431353 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:27:13.431890 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:27:13.432053 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:27:13.436432 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:27:13.438067 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:27:13.438277 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:27:13.441159 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:27:13.441311 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:27:13.451521 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:27:13.451646 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:27:13.460295 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:27:13.467621 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:27:13.469182 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:27:13.474622 ignition[1051]: INFO : Ignition 2.22.0
Dec 16 12:27:13.474622 ignition[1051]: INFO : Stage: umount
Dec 16 12:27:13.477310 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:27:13.477310 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 16 12:27:13.477310 ignition[1051]: INFO : umount: umount passed
Dec 16 12:27:13.477310 ignition[1051]: INFO : Ignition finished successfully
Dec 16 12:27:13.478812 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:27:13.480160 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:27:13.481568 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:27:13.481624 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:27:13.482349 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:27:13.482396 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:27:13.483540 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:27:13.483582 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:27:13.484730 systemd[1]: Stopped target network.target - Network.
Dec 16 12:27:13.485648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:27:13.485699 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:27:13.486797 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:27:13.488298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:27:13.492326 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:27:13.494583 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:27:13.496834 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:27:13.497748 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:27:13.497791 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:27:13.498746 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:27:13.498778 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:27:13.499719 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:27:13.499774 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:27:13.500733 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:27:13.500772 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:27:13.501807 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:27:13.501857 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:27:13.502960 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:27:13.503963 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:27:13.516555 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:27:13.516755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:27:13.520887 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:27:13.521378 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:27:13.521444 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:27:13.525064 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:27:13.526601 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:27:13.527269 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:27:13.530228 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:27:13.530925 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:27:13.532792 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:27:13.532863 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:27:13.534922 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:27:13.538811 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:27:13.538906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:27:13.539999 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:27:13.540044 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:27:13.544717 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:27:13.544782 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:27:13.547083 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:27:13.549731 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:27:13.568882 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:27:13.569229 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:27:13.571974 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:27:13.572028 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:27:13.573008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:27:13.573047 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:27:13.574347 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:27:13.574411 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:27:13.576607 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:27:13.576657 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:27:13.578194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:27:13.578322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:27:13.580511 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:27:13.582238 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:27:13.582306 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:27:13.585207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:27:13.585264 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:27:13.587996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:27:13.588051 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:27:13.591905 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:27:13.594184 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:27:13.599673 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:27:13.600795 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:27:13.603467 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:27:13.606996 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:27:13.634081 systemd[1]: Switching root.
Dec 16 12:27:13.673010 systemd-journald[245]: Journal stopped
Dec 16 12:27:14.691757 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:27:14.691830 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:27:14.691843 kernel: SELinux: policy capability open_perms=1 Dec 16 12:27:14.691853 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:27:14.691866 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:27:14.691875 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:27:14.691885 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:27:14.691898 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:27:14.691908 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:27:14.691920 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:27:14.691930 kernel: audit: type=1403 audit(1765888033.857:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:27:14.691941 systemd[1]: Successfully loaded SELinux policy in 68.490ms. Dec 16 12:27:14.691958 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.227ms. Dec 16 12:27:14.691969 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:27:14.691981 systemd[1]: Detected virtualization kvm. Dec 16 12:27:14.691993 systemd[1]: Detected architecture arm64. Dec 16 12:27:14.692004 systemd[1]: Detected first boot. Dec 16 12:27:14.692015 systemd[1]: Hostname set to . Dec 16 12:27:14.692025 systemd[1]: Initializing machine ID from VM UUID. Dec 16 12:27:14.692035 zram_generator::config[1099]: No configuration found. Dec 16 12:27:14.692046 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:27:14.692057 systemd[1]: Populated /etc with preset unit settings. 
Dec 16 12:27:14.692072 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:27:14.692085 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:27:14.692096 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:27:14.692107 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:27:14.692136 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:27:14.692150 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:27:14.692164 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:27:14.692177 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:27:14.692187 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:27:14.692198 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:27:14.692209 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:27:14.692222 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:27:14.692232 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:27:14.692243 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:27:14.692254 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:27:14.692268 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:27:14.692282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:27:14.692293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Dec 16 12:27:14.692303 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:27:14.692314 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:27:14.692338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:27:14.692351 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:27:14.692364 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:27:14.692376 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:27:14.692387 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:27:14.692398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:27:14.692409 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:27:14.692419 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:27:14.692430 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:27:14.692441 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:27:14.692452 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:27:14.692464 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:27:14.692475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:27:14.692486 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:27:14.692496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:27:14.692508 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:27:14.692519 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 12:27:14.692530 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Dec 16 12:27:14.692541 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:27:14.692552 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:27:14.692562 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:27:14.692574 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:27:14.692585 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:27:14.692596 systemd[1]: Reached target machines.target - Containers. Dec 16 12:27:14.692607 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:27:14.692617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:27:14.692628 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:27:14.692639 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:27:14.692649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:27:14.692661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:27:14.692672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:27:14.692682 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:27:14.692693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:27:14.692704 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:27:14.692714 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:27:14.692726 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Dec 16 12:27:14.692736 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:27:14.692748 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:27:14.692759 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:27:14.692772 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:27:14.692782 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:27:14.692794 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:27:14.692805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:27:14.692816 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:27:14.692827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:27:14.692838 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:27:14.692848 systemd[1]: Stopped verity-setup.service. Dec 16 12:27:14.692859 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:27:14.692869 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:27:14.692881 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:27:14.692892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:27:14.692902 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:27:14.692912 kernel: loop: module loaded Dec 16 12:27:14.692922 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:27:14.692933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Dec 16 12:27:14.692945 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:27:14.692956 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:27:14.692967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:27:14.692977 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:27:14.692988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:27:14.692998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:27:14.693009 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:27:14.693023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:27:14.693085 kernel: fuse: init (API version 7.41) Dec 16 12:27:14.693106 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:27:14.698245 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:27:14.698272 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:27:14.698284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:27:14.698295 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:27:14.698306 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:27:14.698317 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:27:14.698338 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:27:14.698351 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:27:14.698369 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 16 12:27:14.698380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:27:14.698391 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:27:14.698404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:27:14.698446 systemd-journald[1164]: Collecting audit messages is disabled. Dec 16 12:27:14.698471 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:27:14.698482 kernel: ACPI: bus type drm_connector registered Dec 16 12:27:14.698496 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:27:14.698506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:27:14.698518 systemd-journald[1164]: Journal started Dec 16 12:27:14.698550 systemd-journald[1164]: Runtime Journal (/run/log/journal/210681505651490c86240f17288a0bae) is 8M, max 76.5M, 68.5M free. Dec 16 12:27:14.380381 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:27:14.406440 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:27:14.406879 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:27:14.703407 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:27:14.707149 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:27:14.709486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:27:14.711040 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:27:14.714509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Dec 16 12:27:14.715504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:27:14.716594 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:27:14.719416 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:27:14.720740 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:27:14.742796 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:27:14.749464 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:27:14.758368 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:27:14.759415 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:27:14.764002 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:27:14.768683 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:27:14.774198 kernel: loop0: detected capacity change from 0 to 100632 Dec 16 12:27:14.805233 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:27:14.805767 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:27:14.812006 systemd-journald[1164]: Time spent on flushing to /var/log/journal/210681505651490c86240f17288a0bae is 45.547ms for 1173 entries. Dec 16 12:27:14.812006 systemd-journald[1164]: System Journal (/var/log/journal/210681505651490c86240f17288a0bae) is 8M, max 584.8M, 576.8M free. Dec 16 12:27:14.868402 systemd-journald[1164]: Received client request to flush runtime journal. Dec 16 12:27:14.868464 kernel: loop1: detected capacity change from 0 to 8 Dec 16 12:27:14.849787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 12:27:14.872852 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:27:14.875664 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:27:14.877147 kernel: loop2: detected capacity change from 0 to 207008 Dec 16 12:27:14.880260 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:27:14.887623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:27:14.912466 kernel: loop3: detected capacity change from 0 to 119840 Dec 16 12:27:14.928880 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Dec 16 12:27:14.928899 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Dec 16 12:27:14.933752 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:27:14.954144 kernel: loop4: detected capacity change from 0 to 100632 Dec 16 12:27:14.971141 kernel: loop5: detected capacity change from 0 to 8 Dec 16 12:27:14.975159 kernel: loop6: detected capacity change from 0 to 207008 Dec 16 12:27:15.008169 kernel: loop7: detected capacity change from 0 to 119840 Dec 16 12:27:15.026980 (sd-merge)[1238]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 16 12:27:15.027887 (sd-merge)[1238]: Merged extensions into '/usr'. Dec 16 12:27:15.036576 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:27:15.036726 systemd[1]: Reloading... Dec 16 12:27:15.158154 zram_generator::config[1260]: No configuration found. Dec 16 12:27:15.261149 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:27:15.387897 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:27:15.388480 systemd[1]: Reloading finished in 350 ms. 
Dec 16 12:27:15.411179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:27:15.412213 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:27:15.413344 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:27:15.423162 systemd[1]: Starting ensure-sysext.service... Dec 16 12:27:15.428296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:27:15.430652 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:27:15.447826 systemd[1]: Reload requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:27:15.447842 systemd[1]: Reloading... Dec 16 12:27:15.453135 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:27:15.453167 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:27:15.453437 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:27:15.453623 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 12:27:15.454252 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:27:15.454534 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Dec 16 12:27:15.454576 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Dec 16 12:27:15.458259 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:27:15.458269 systemd-tmpfiles[1303]: Skipping /boot Dec 16 12:27:15.464511 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 16 12:27:15.464658 systemd-tmpfiles[1303]: Skipping /boot Dec 16 12:27:15.485656 systemd-udevd[1304]: Using default interface naming scheme 'v255'. Dec 16 12:27:15.538679 zram_generator::config[1328]: No configuration found. Dec 16 12:27:15.793132 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:27:15.800447 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:27:15.800937 systemd[1]: Reloading finished in 352 ms. Dec 16 12:27:15.817252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:27:15.819781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:27:15.840456 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:27:15.845268 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:27:15.849379 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:27:15.856461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:27:15.861400 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:27:15.864455 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:27:15.884799 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:27:15.890755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:27:15.894452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:27:15.899490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:27:15.910539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:27:15.912268 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:27:15.912459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:27:15.918714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:27:15.918922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:27:15.919058 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:27:15.926786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:27:15.929349 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:27:15.931301 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:27:15.931391 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:27:15.936509 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:27:15.946803 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. 
Dec 16 12:27:15.946934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:27:15.947019 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:27:15.947043 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:27:15.952430 systemd[1]: Finished ensure-sysext.service. Dec 16 12:27:15.959447 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:27:15.971567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:27:15.972927 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:27:16.008487 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:27:16.014274 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:27:16.019860 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 16 12:27:16.019940 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 12:27:16.019956 kernel: [drm] features: -context_init Dec 16 12:27:16.020978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:27:16.021190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 16 12:27:16.024153 kernel: [drm] number of scanouts: 1 Dec 16 12:27:16.024216 kernel: [drm] number of cap sets: 0 Dec 16 12:27:16.025503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:27:16.028152 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Dec 16 12:27:16.035028 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:27:16.038503 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 12:27:16.039728 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:27:16.046528 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 12:27:16.046874 augenrules[1457]: No rules Dec 16 12:27:16.050536 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:27:16.050792 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:27:16.052531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:27:16.052716 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:27:16.055625 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:27:16.055782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:27:16.083333 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:27:16.095506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 12:27:16.101401 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:27:16.102102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:27:16.119088 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 16 12:27:16.154077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:27:16.179759 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:27:16.191383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:27:16.191617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:27:16.194486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:27:16.288676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:27:16.301699 systemd-networkd[1418]: lo: Link UP Dec 16 12:27:16.301712 systemd-networkd[1418]: lo: Gained carrier Dec 16 12:27:16.305509 systemd-networkd[1418]: Enumeration completed Dec 16 12:27:16.305735 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:27:16.308176 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:27:16.308180 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:27:16.308867 systemd-networkd[1418]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:27:16.308877 systemd-networkd[1418]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:27:16.309253 systemd-networkd[1418]: eth0: Link UP Dec 16 12:27:16.309378 systemd-networkd[1418]: eth0: Gained carrier Dec 16 12:27:16.309397 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:27:16.309576 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Dec 16 12:27:16.312390 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:27:16.315430 systemd-networkd[1418]: eth1: Link UP Dec 16 12:27:16.317743 systemd-networkd[1418]: eth1: Gained carrier Dec 16 12:27:16.317775 systemd-networkd[1418]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:27:16.356252 systemd-networkd[1418]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Dec 16 12:27:16.369236 systemd-networkd[1418]: eth0: DHCPv4 address 88.99.82.111/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 12:27:16.371557 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:27:16.372434 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:27:16.377291 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:27:16.386936 systemd-resolved[1419]: Positive Trust Anchors: Dec 16 12:27:16.387181 systemd-resolved[1419]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:27:16.387215 systemd-resolved[1419]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:27:16.391721 systemd-resolved[1419]: Using system hostname 'ci-4459-2-2-0-7f64ef3ba0'. Dec 16 12:27:16.393721 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 12:27:16.394712 systemd[1]: Reached target network.target - Network. Dec 16 12:27:16.395413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:27:16.396199 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:27:16.396984 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 12:27:16.397966 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:27:16.399105 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:27:16.399926 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:27:16.400838 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:27:16.401707 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:27:16.401815 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:27:16.402474 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:27:16.404496 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:27:16.406855 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:27:16.410030 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:27:16.411068 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:27:16.875462 systemd-resolved[1419]: Clock change detected. Flushing caches. Dec 16 12:27:16.875513 systemd-timesyncd[1435]: Contacted time server 46.41.21.10:123 (0.flatcar.pool.ntp.org). Dec 16 12:27:16.875598 systemd-timesyncd[1435]: Initial clock synchronization to Tue 2025-12-16 12:27:16.875413 UTC. 
Dec 16 12:27:16.876677 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:27:16.879825 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:27:16.881036 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:27:16.884291 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 12:27:16.885540 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:27:16.886433 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:27:16.887107 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:27:16.887218 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:27:16.888788 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:27:16.892560 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 12:27:16.895694 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:27:16.899738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:27:16.907818 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:27:16.912517 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:27:16.913224 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:27:16.916120 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:27:16.919616 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:27:16.923147 jq[1510]: false Dec 16 12:27:16.924638 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Dec 16 12:27:16.933460 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:27:16.938038 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:27:16.942683 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:27:16.945429 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:27:16.946108 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:27:16.955507 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:27:16.958942 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:27:16.965388 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:27:16.967807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:27:16.973643 coreos-metadata[1507]: Dec 16 12:27:16.968 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 16 12:27:16.973643 coreos-metadata[1507]: Dec 16 12:27:16.971 INFO Fetch successful Dec 16 12:27:16.973643 coreos-metadata[1507]: Dec 16 12:27:16.972 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 16 12:27:16.973643 coreos-metadata[1507]: Dec 16 12:27:16.972 INFO Fetch successful Dec 16 12:27:16.968042 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:27:16.969912 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:27:16.970418 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 12:27:16.984947 extend-filesystems[1511]: Found /dev/sda6 Dec 16 12:27:17.010967 extend-filesystems[1511]: Found /dev/sda9 Dec 16 12:27:17.018489 tar[1529]: linux-arm64/LICENSE Dec 16 12:27:17.018489 tar[1529]: linux-arm64/helm Dec 16 12:27:17.013027 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:27:17.014445 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:27:17.027757 extend-filesystems[1511]: Checking size of /dev/sda9 Dec 16 12:27:17.041502 jq[1527]: true Dec 16 12:27:17.039685 dbus-daemon[1508]: [system] SELinux support is enabled Dec 16 12:27:17.039886 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:27:17.041375 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:27:17.043923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:27:17.043948 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:27:17.048467 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:27:17.048495 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 16 12:27:17.080094 extend-filesystems[1511]: Resized partition /dev/sda9 Dec 16 12:27:17.087466 extend-filesystems[1561]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:27:17.101389 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 16 12:27:17.113248 jq[1555]: true Dec 16 12:27:17.124113 update_engine[1521]: I20251216 12:27:17.123891 1521 main.cc:92] Flatcar Update Engine starting Dec 16 12:27:17.145537 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:27:17.150013 update_engine[1521]: I20251216 12:27:17.149528 1521 update_check_scheduler.cc:74] Next update check in 10m34s Dec 16 12:27:17.152516 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:27:17.187743 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 12:27:17.191793 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:27:17.297866 containerd[1536]: time="2025-12-16T12:27:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:27:17.314423 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:27:17.317862 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:27:17.324767 systemd[1]: Starting sshkeys.service... Dec 16 12:27:17.332940 systemd-logind[1519]: New seat seat0. Dec 16 12:27:17.342203 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:27:17.342227 systemd-logind[1519]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 16 12:27:17.342579 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 16 12:27:17.344747 containerd[1536]: time="2025-12-16T12:27:17.344333532Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:27:17.354383 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 16 12:27:17.377388 extend-filesystems[1561]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 12:27:17.377388 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 16 12:27:17.377388 extend-filesystems[1561]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 16 12:27:17.392559 extend-filesystems[1511]: Resized filesystem in /dev/sda9 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.388913972Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.28µs" Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.388951772Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.388976332Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.389141652Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.389170012Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.389199612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.389264772Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Dec 16 12:27:17.395762 containerd[1536]: time="2025-12-16T12:27:17.389280172Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:27:17.381806 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:27:17.383404 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.396607612Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.396650412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.396668012Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.396675732Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.396798652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.397000292Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.397039332Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Dec 16 12:27:17.398768 containerd[1536]: time="2025-12-16T12:27:17.397050212Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:27:17.387045 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:27:17.393350 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 12:27:17.400808 containerd[1536]: time="2025-12-16T12:27:17.400489852Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:27:17.402085 containerd[1536]: time="2025-12-16T12:27:17.401729012Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:27:17.402085 containerd[1536]: time="2025-12-16T12:27:17.401852692Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:27:17.407861 containerd[1536]: time="2025-12-16T12:27:17.407598492Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:27:17.407861 containerd[1536]: time="2025-12-16T12:27:17.407672492Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:27:17.407861 containerd[1536]: time="2025-12-16T12:27:17.407689252Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:27:17.407861 containerd[1536]: time="2025-12-16T12:27:17.407703012Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:27:17.407861 containerd[1536]: time="2025-12-16T12:27:17.407716492Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:27:17.407861 containerd[1536]: time="2025-12-16T12:27:17.407728292Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408276692Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408307732Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408328052Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408339932Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408351372Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408389932Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408532372Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408596892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408612972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408626252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408637532Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408647772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408663412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408674212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:27:17.409610 containerd[1536]: time="2025-12-16T12:27:17.408686252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:27:17.409921 containerd[1536]: time="2025-12-16T12:27:17.408696692Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:27:17.409921 containerd[1536]: time="2025-12-16T12:27:17.408707532Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:27:17.409921 containerd[1536]: time="2025-12-16T12:27:17.408879172Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:27:17.409921 containerd[1536]: time="2025-12-16T12:27:17.408893652Z" level=info msg="Start snapshots syncer" Dec 16 12:27:17.409921 containerd[1536]: time="2025-12-16T12:27:17.408919452Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:27:17.410006 containerd[1536]: time="2025-12-16T12:27:17.409190812Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:27:17.410006 containerd[1536]: time="2025-12-16T12:27:17.409242332Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:27:17.410103 containerd[1536]: time="2025-12-16T12:27:17.409291812Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:27:17.411936 containerd[1536]: time="2025-12-16T12:27:17.411910612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:27:17.412046 containerd[1536]: time="2025-12-16T12:27:17.412030852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:27:17.412244 containerd[1536]: time="2025-12-16T12:27:17.412225252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:27:17.412350 containerd[1536]: time="2025-12-16T12:27:17.412333892Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:27:17.412497 containerd[1536]: time="2025-12-16T12:27:17.412428012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:27:17.412863 containerd[1536]: time="2025-12-16T12:27:17.412673892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:27:17.412863 containerd[1536]: time="2025-12-16T12:27:17.412696412Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:27:17.412863 containerd[1536]: time="2025-12-16T12:27:17.412764732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:27:17.412863 containerd[1536]: time="2025-12-16T12:27:17.412782492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:27:17.412863 containerd[1536]: time="2025-12-16T12:27:17.412795732Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413179852Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413213052Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413237052Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413248372Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413256692Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413267012Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:27:17.413385 containerd[1536]: time="2025-12-16T12:27:17.413280772Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:27:17.413767 containerd[1536]: time="2025-12-16T12:27:17.413748052Z" level=info msg="runtime interface created" Dec 16 12:27:17.414195 containerd[1536]: time="2025-12-16T12:27:17.413824892Z" level=info msg="created NRI interface" Dec 16 12:27:17.414195 containerd[1536]: time="2025-12-16T12:27:17.413840732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:27:17.414195 containerd[1536]: time="2025-12-16T12:27:17.413860332Z" level=info msg="Connect containerd service" Dec 16 12:27:17.414195 containerd[1536]: time="2025-12-16T12:27:17.413888252Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:27:17.416723 
containerd[1536]: time="2025-12-16T12:27:17.416678732Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:27:17.427551 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:27:17.459994 coreos-metadata[1600]: Dec 16 12:27:17.459 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 16 12:27:17.460930 coreos-metadata[1600]: Dec 16 12:27:17.460 INFO Fetch successful Dec 16 12:27:17.464085 unknown[1600]: wrote ssh authorized keys file for user: core Dec 16 12:27:17.506176 update-ssh-keys[1612]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:27:17.509426 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:27:17.514073 systemd[1]: Finished sshkeys.service. Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579313852Z" level=info msg="Start subscribing containerd event" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579418092Z" level=info msg="Start recovering state" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579512772Z" level=info msg="Start event monitor" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579525972Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579643652Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579691932Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579941612Z" level=info msg="Start streaming server" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579961492Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579969212Z" level=info msg="runtime interface starting up..." Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579975212Z" level=info msg="starting plugins..." Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.579995212Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:27:17.581243 containerd[1536]: time="2025-12-16T12:27:17.580373892Z" level=info msg="containerd successfully booted in 0.283126s" Dec 16 12:27:17.580511 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:27:17.721666 tar[1529]: linux-arm64/README.md Dec 16 12:27:17.740638 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:27:18.072071 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:27:18.095808 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:27:18.099718 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:27:18.124318 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:27:18.124653 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:27:18.128419 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:27:18.151510 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:27:18.155774 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:27:18.159727 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 16 12:27:18.162635 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 12:27:18.443730 systemd-networkd[1418]: eth0: Gained IPv6LL Dec 16 12:27:18.447774 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:27:18.449685 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:27:18.452930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:27:18.456659 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:27:18.488463 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:27:18.763698 systemd-networkd[1418]: eth1: Gained IPv6LL Dec 16 12:27:19.271294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:19.274156 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:27:19.276648 systemd[1]: Startup finished in 2.387s (kernel) + 5.229s (initrd) + 5.023s (userspace) = 12.640s. Dec 16 12:27:19.284909 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:27:19.790005 kubelet[1656]: E1216 12:27:19.789908 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:27:19.794192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:27:19.794337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:27:19.795735 systemd[1]: kubelet.service: Consumed 872ms CPU time, 254.3M memory peak. Dec 16 12:27:30.045438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:27:30.048811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:27:30.211949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:30.225973 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:27:30.286271 kubelet[1675]: E1216 12:27:30.286222 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:27:30.289736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:27:30.290001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:27:30.290668 systemd[1]: kubelet.service: Consumed 174ms CPU time, 106.1M memory peak. Dec 16 12:27:40.540738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:27:40.543485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:27:40.698885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:40.709202 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:27:40.755378 kubelet[1689]: E1216 12:27:40.755298 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:27:40.758106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:27:40.758238 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 12:27:40.758869 systemd[1]: kubelet.service: Consumed 166ms CPU time, 105M memory peak. Dec 16 12:27:50.632766 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:27:50.635040 systemd[1]: Started sshd@0-88.99.82.111:22-139.178.89.65:34448.service - OpenSSH per-connection server daemon (139.178.89.65:34448). Dec 16 12:27:50.805780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:27:50.807246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:27:50.971338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:50.978689 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:27:51.027296 kubelet[1708]: E1216 12:27:51.027214 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:27:51.029641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:27:51.029793 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:27:51.030678 systemd[1]: kubelet.service: Consumed 166ms CPU time, 107.3M memory peak. Dec 16 12:27:51.688975 sshd[1697]: Accepted publickey for core from 139.178.89.65 port 34448 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:27:51.692312 sshd-session[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:51.700585 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:27:51.702118 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 16 12:27:51.710220 systemd-logind[1519]: New session 1 of user core.
Dec 16 12:27:51.735799 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:27:51.738740 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:27:51.751909 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 12:27:51.755450 systemd-logind[1519]: New session c1 of user core.
Dec 16 12:27:51.888239 systemd[1716]: Queued start job for default target default.target.
Dec 16 12:27:51.900144 systemd[1716]: Created slice app.slice - User Application Slice.
Dec 16 12:27:51.900204 systemd[1716]: Reached target paths.target - Paths.
Dec 16 12:27:51.900271 systemd[1716]: Reached target timers.target - Timers.
Dec 16 12:27:51.902615 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:27:51.927084 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:27:51.927613 systemd[1716]: Reached target sockets.target - Sockets.
Dec 16 12:27:51.927883 systemd[1716]: Reached target basic.target - Basic System.
Dec 16 12:27:51.928154 systemd[1716]: Reached target default.target - Main User Target.
Dec 16 12:27:51.928190 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:27:51.929510 systemd[1716]: Startup finished in 166ms.
Dec 16 12:27:51.939714 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:27:52.662562 systemd[1]: Started sshd@1-88.99.82.111:22-139.178.89.65:34454.service - OpenSSH per-connection server daemon (139.178.89.65:34454).
Dec 16 12:27:53.723149 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 34454 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA
Dec 16 12:27:53.726448 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:27:53.734262 systemd-logind[1519]: New session 2 of user core.
Dec 16 12:27:53.739714 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 12:27:54.450391 sshd[1730]: Connection closed by 139.178.89.65 port 34454
Dec 16 12:27:54.449710 sshd-session[1727]: pam_unix(sshd:session): session closed for user core
Dec 16 12:27:54.455508 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit.
Dec 16 12:27:54.456187 systemd[1]: sshd@1-88.99.82.111:22-139.178.89.65:34454.service: Deactivated successfully.
Dec 16 12:27:54.458471 systemd[1]: session-2.scope: Deactivated successfully.
Dec 16 12:27:54.461171 systemd-logind[1519]: Removed session 2.
Dec 16 12:27:54.631916 systemd[1]: Started sshd@2-88.99.82.111:22-139.178.89.65:34460.service - OpenSSH per-connection server daemon (139.178.89.65:34460).
Dec 16 12:27:55.690134 sshd[1736]: Accepted publickey for core from 139.178.89.65 port 34460 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA
Dec 16 12:27:55.692826 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:27:55.700429 systemd-logind[1519]: New session 3 of user core.
Dec 16 12:27:55.707746 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:27:56.406494 sshd[1739]: Connection closed by 139.178.89.65 port 34460
Dec 16 12:27:56.407341 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Dec 16 12:27:56.412797 systemd[1]: sshd@2-88.99.82.111:22-139.178.89.65:34460.service: Deactivated successfully.
Dec 16 12:27:56.416172 systemd[1]: session-3.scope: Deactivated successfully.
Dec 16 12:27:56.417350 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit.
Dec 16 12:27:56.419222 systemd-logind[1519]: Removed session 3.
Dec 16 12:27:56.586949 systemd[1]: Started sshd@3-88.99.82.111:22-139.178.89.65:34464.service - OpenSSH per-connection server daemon (139.178.89.65:34464).
Dec 16 12:27:57.657066 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 34464 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA
Dec 16 12:27:57.659390 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:27:57.666743 systemd-logind[1519]: New session 4 of user core.
Dec 16 12:27:57.672629 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:27:58.377406 sshd[1748]: Connection closed by 139.178.89.65 port 34464
Dec 16 12:27:58.378060 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Dec 16 12:27:58.382138 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:27:58.382469 systemd[1]: sshd@3-88.99.82.111:22-139.178.89.65:34464.service: Deactivated successfully.
Dec 16 12:27:58.385135 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:27:58.387205 systemd-logind[1519]: Removed session 4.
Dec 16 12:27:58.568417 systemd[1]: Started sshd@4-88.99.82.111:22-139.178.89.65:34470.service - OpenSSH per-connection server daemon (139.178.89.65:34470).
Dec 16 12:27:59.607889 sshd[1754]: Accepted publickey for core from 139.178.89.65 port 34470 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA
Dec 16 12:27:59.610066 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:27:59.615858 systemd-logind[1519]: New session 5 of user core.
Dec 16 12:27:59.624210 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:28:00.165985 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:28:00.166295 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:28:00.183963 sudo[1758]: pam_unix(sudo:session): session closed for user root
Dec 16 12:28:00.351450 sshd[1757]: Connection closed by 139.178.89.65 port 34470
Dec 16 12:28:00.352487 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:00.358538 systemd[1]: sshd@4-88.99.82.111:22-139.178.89.65:34470.service: Deactivated successfully.
Dec 16 12:28:00.363550 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:28:00.364611 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:28:00.366279 systemd-logind[1519]: Removed session 5.
Dec 16 12:28:00.538697 systemd[1]: Started sshd@5-88.99.82.111:22-139.178.89.65:36240.service - OpenSSH per-connection server daemon (139.178.89.65:36240).
Dec 16 12:28:01.056144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 12:28:01.058823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:01.214306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:01.228942 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:28:01.280007 kubelet[1775]: E1216 12:28:01.279936 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:28:01.282600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:28:01.282758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:28:01.283706 systemd[1]: kubelet.service: Consumed 169ms CPU time, 105.2M memory peak.
Dec 16 12:28:01.593951 sshd[1764]: Accepted publickey for core from 139.178.89.65 port 36240 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA
Dec 16 12:28:01.595776 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:01.602315 systemd-logind[1519]: New session 6 of user core.
Dec 16 12:28:01.607650 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:28:02.142121 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:28:02.142793 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:28:02.149019 sudo[1783]: pam_unix(sudo:session): session closed for user root
Dec 16 12:28:02.157245 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:28:02.157598 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:28:02.169777 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:28:02.213421 augenrules[1805]: No rules
Dec 16 12:28:02.215025 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:28:02.215262 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:28:02.217738 sudo[1782]: pam_unix(sudo:session): session closed for user root
Dec 16 12:28:02.387023 sshd[1781]: Connection closed by 139.178.89.65 port 36240
Dec 16 12:28:02.386272 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Dec 16 12:28:02.392766 systemd[1]: sshd@5-88.99.82.111:22-139.178.89.65:36240.service: Deactivated successfully.
Dec 16 12:28:02.396147 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:28:02.399120 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:28:02.400950 systemd-logind[1519]: Removed session 6.
Dec 16 12:28:02.578640 systemd[1]: Started sshd@6-88.99.82.111:22-139.178.89.65:36246.service - OpenSSH per-connection server daemon (139.178.89.65:36246).
Dec 16 12:28:02.636781 update_engine[1521]: I20251216 12:28:02.636693 1521 update_attempter.cc:509] Updating boot flags...
Dec 16 12:28:03.644444 sshd[1814]: Accepted publickey for core from 139.178.89.65 port 36246 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA
Dec 16 12:28:03.646699 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:28:03.654562 systemd-logind[1519]: New session 7 of user core.
Dec 16 12:28:03.660623 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:28:04.197131 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:28:04.197440 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:28:04.517617 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:28:04.533400 (dockerd)[1856]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:28:04.756773 dockerd[1856]: time="2025-12-16T12:28:04.756689718Z" level=info msg="Starting up"
Dec 16 12:28:04.757750 dockerd[1856]: time="2025-12-16T12:28:04.757684764Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:28:04.773069 dockerd[1856]: time="2025-12-16T12:28:04.772725263Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:28:04.790493 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4106320725-merged.mount: Deactivated successfully.
Dec 16 12:28:04.809535 systemd[1]: var-lib-docker-metacopy\x2dcheck3602827555-merged.mount: Deactivated successfully.
Dec 16 12:28:04.821420 dockerd[1856]: time="2025-12-16T12:28:04.821060399Z" level=info msg="Loading containers: start."
Dec 16 12:28:04.833386 kernel: Initializing XFRM netlink socket
Dec 16 12:28:05.079269 systemd-networkd[1418]: docker0: Link UP
Dec 16 12:28:05.084652 dockerd[1856]: time="2025-12-16T12:28:05.084538113Z" level=info msg="Loading containers: done."
Dec 16 12:28:05.104309 dockerd[1856]: time="2025-12-16T12:28:05.103926843Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:28:05.104309 dockerd[1856]: time="2025-12-16T12:28:05.104040532Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:28:05.104309 dockerd[1856]: time="2025-12-16T12:28:05.104148341Z" level=info msg="Initializing buildkit"
Dec 16 12:28:05.131256 dockerd[1856]: time="2025-12-16T12:28:05.131212213Z" level=info msg="Completed buildkit initialization"
Dec 16 12:28:05.140860 dockerd[1856]: time="2025-12-16T12:28:05.140810071Z" level=info msg="Daemon has completed initialization"
Dec 16 12:28:05.141151 dockerd[1856]: time="2025-12-16T12:28:05.141070572Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:28:05.144229 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:28:06.161690 containerd[1536]: time="2025-12-16T12:28:06.161549694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Dec 16 12:28:06.797975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157510312.mount: Deactivated successfully.
Dec 16 12:28:07.639389 containerd[1536]: time="2025-12-16T12:28:07.639279154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:07.640735 containerd[1536]: time="2025-12-16T12:28:07.640697815Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26432057"
Dec 16 12:28:07.642377 containerd[1536]: time="2025-12-16T12:28:07.641639202Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:07.645574 containerd[1536]: time="2025-12-16T12:28:07.645532360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:07.646757 containerd[1536]: time="2025-12-16T12:28:07.646717644Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.485091264s"
Dec 16 12:28:07.646757 containerd[1536]: time="2025-12-16T12:28:07.646758207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\""
Dec 16 12:28:07.647715 containerd[1536]: time="2025-12-16T12:28:07.647689833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Dec 16 12:28:08.935215 containerd[1536]: time="2025-12-16T12:28:08.935140334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:08.936915 containerd[1536]: time="2025-12-16T12:28:08.936847848Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618975"
Dec 16 12:28:08.937562 containerd[1536]: time="2025-12-16T12:28:08.937517212Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:08.941257 containerd[1536]: time="2025-12-16T12:28:08.941201258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:08.942577 containerd[1536]: time="2025-12-16T12:28:08.942224607Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.294417525s"
Dec 16 12:28:08.942577 containerd[1536]: time="2025-12-16T12:28:08.942260569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\""
Dec 16 12:28:08.942728 containerd[1536]: time="2025-12-16T12:28:08.942703359Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Dec 16 12:28:09.905171 containerd[1536]: time="2025-12-16T12:28:09.905074505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:09.907076 containerd[1536]: time="2025-12-16T12:28:09.907002985Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618456"
Dec 16 12:28:09.908037 containerd[1536]: time="2025-12-16T12:28:09.907993287Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:09.913230 containerd[1536]: time="2025-12-16T12:28:09.911573271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:09.913613 containerd[1536]: time="2025-12-16T12:28:09.913571076Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 970.833475ms"
Dec 16 12:28:09.913737 containerd[1536]: time="2025-12-16T12:28:09.913713005Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\""
Dec 16 12:28:09.914438 containerd[1536]: time="2025-12-16T12:28:09.914401848Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Dec 16 12:28:10.912183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3399556018.mount: Deactivated successfully.
Dec 16 12:28:11.251702 containerd[1536]: time="2025-12-16T12:28:11.251533750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:11.253072 containerd[1536]: time="2025-12-16T12:28:11.252846103Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561825"
Dec 16 12:28:11.254038 containerd[1536]: time="2025-12-16T12:28:11.253997286Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:11.256564 containerd[1536]: time="2025-12-16T12:28:11.256513624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:11.257652 containerd[1536]: time="2025-12-16T12:28:11.257346910Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.342781851s"
Dec 16 12:28:11.257652 containerd[1536]: time="2025-12-16T12:28:11.257450636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\""
Dec 16 12:28:11.258016 containerd[1536]: time="2025-12-16T12:28:11.257989465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Dec 16 12:28:11.306158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 16 12:28:11.308855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:11.471054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:11.481911 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:28:11.530394 kubelet[2148]: E1216 12:28:11.529452 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:28:11.532387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:28:11.532651 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:28:11.533191 systemd[1]: kubelet.service: Consumed 168ms CPU time, 105.1M memory peak.
Dec 16 12:28:11.986661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3282506397.mount: Deactivated successfully.
Dec 16 12:28:12.622384 containerd[1536]: time="2025-12-16T12:28:12.621321146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:12.623955 containerd[1536]: time="2025-12-16T12:28:12.623922760Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Dec 16 12:28:12.624884 containerd[1536]: time="2025-12-16T12:28:12.624853128Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:12.629497 containerd[1536]: time="2025-12-16T12:28:12.629439525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:12.630805 containerd[1536]: time="2025-12-16T12:28:12.630740432Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.372643521s"
Dec 16 12:28:12.630878 containerd[1536]: time="2025-12-16T12:28:12.630803235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Dec 16 12:28:12.631956 containerd[1536]: time="2025-12-16T12:28:12.631923533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:28:13.117900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3973344190.mount: Deactivated successfully.
Dec 16 12:28:13.125839 containerd[1536]: time="2025-12-16T12:28:13.125687224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:28:13.127028 containerd[1536]: time="2025-12-16T12:28:13.126747635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Dec 16 12:28:13.127904 containerd[1536]: time="2025-12-16T12:28:13.127864489Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:28:13.130313 containerd[1536]: time="2025-12-16T12:28:13.130264805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:28:13.131190 containerd[1536]: time="2025-12-16T12:28:13.131150928Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 499.050027ms"
Dec 16 12:28:13.131190 containerd[1536]: time="2025-12-16T12:28:13.131188370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:28:13.131769 containerd[1536]: time="2025-12-16T12:28:13.131722756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Dec 16 12:28:13.629531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424757601.mount: Deactivated successfully.
Dec 16 12:28:15.081242 containerd[1536]: time="2025-12-16T12:28:15.081182604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:15.083306 containerd[1536]: time="2025-12-16T12:28:15.082417937Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239"
Dec 16 12:28:15.083306 containerd[1536]: time="2025-12-16T12:28:15.083241372Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:15.088488 containerd[1536]: time="2025-12-16T12:28:15.088425912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:15.089696 containerd[1536]: time="2025-12-16T12:28:15.089639923Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.957879566s"
Dec 16 12:28:15.089696 containerd[1536]: time="2025-12-16T12:28:15.089688845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Dec 16 12:28:21.556060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 16 12:28:21.560212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:21.715543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:21.726103 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:28:21.768198 kubelet[2294]: E1216 12:28:21.768140 2294 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:28:21.771392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:28:21.771647 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:28:21.772294 systemd[1]: kubelet.service: Consumed 154ms CPU time, 106.7M memory peak.
Dec 16 12:28:22.713739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:22.713900 systemd[1]: kubelet.service: Consumed 154ms CPU time, 106.7M memory peak.
Dec 16 12:28:22.716783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:22.750090 systemd[1]: Reload requested from client PID 2308 ('systemctl') (unit session-7.scope)...
Dec 16 12:28:22.750110 systemd[1]: Reloading...
Dec 16 12:28:22.864771 zram_generator::config[2351]: No configuration found.
Dec 16 12:28:23.084336 systemd[1]: Reloading finished in 333 ms.
Dec 16 12:28:23.150179 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:28:23.150293 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:28:23.150843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:23.150908 systemd[1]: kubelet.service: Consumed 108ms CPU time, 95.2M memory peak.
Dec 16 12:28:23.153674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:23.307593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:23.317809 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:28:23.364518 kubelet[2400]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:28:23.364518 kubelet[2400]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:28:23.364518 kubelet[2400]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:28:23.364518 kubelet[2400]: I1216 12:28:23.364104 2400 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:28:24.118038 kubelet[2400]: I1216 12:28:24.117974 2400 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Dec 16 12:28:24.118038 kubelet[2400]: I1216 12:28:24.118021 2400 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:28:24.118611 kubelet[2400]: I1216 12:28:24.118425 2400 server.go:954] "Client rotation is on, will bootstrap in background"
Dec 16 12:28:24.148571 kubelet[2400]: E1216 12:28:24.148523 2400 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://88.99.82.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.99.82.111:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:28:24.149492 kubelet[2400]: I1216 12:28:24.149362 2400 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:28:24.158804 kubelet[2400]: I1216 12:28:24.158773 2400 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:28:24.162064 kubelet[2400]: I1216 12:28:24.161942 2400 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:28:24.163162 kubelet[2400]: I1216 12:28:24.163113 2400 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:28:24.163487 kubelet[2400]: I1216 12:28:24.163264 2400 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-0-7f64ef3ba0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:28:24.163707 kubelet[2400]: I1216 12:28:24.163691 2400 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:28:24.163810 kubelet[2400]: I1216 12:28:24.163801 2400 container_manager_linux.go:304] "Creating device plugin manager"
Dec 16 12:28:24.164073 kubelet[2400]: I1216 12:28:24.164056 2400 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:28:24.172290 kubelet[2400]: I1216 12:28:24.172242 2400 kubelet.go:446] "Attempting to sync node with API server"
Dec 16 12:28:24.172498 kubelet[2400]: I1216 12:28:24.172484 2400 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:28:24.172614 kubelet[2400]: I1216 12:28:24.172601 2400 kubelet.go:352] "Adding apiserver pod source"
Dec 16 12:28:24.172756 kubelet[2400]: I1216 12:28:24.172740 2400 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:28:24.176778 kubelet[2400]: W1216 12:28:24.176668 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://88.99.82.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-0-7f64ef3ba0&limit=500&resourceVersion=0": dial tcp 88.99.82.111:6443: connect: connection refused
Dec 16 12:28:24.176909 kubelet[2400]: E1216 12:28:24.176803 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://88.99.82.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-0-7f64ef3ba0&limit=500&resourceVersion=0\": dial tcp 88.99.82.111:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:28:24.178053 kubelet[2400]: W1216 12:28:24.177981 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://88.99.82.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 88.99.82.111:6443: connect: connection refused
Dec 16 12:28:24.178149 kubelet[2400]: E1216 12:28:24.178065 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://88.99.82.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 88.99.82.111:6443: connect: connection refused" logger="UnhandledError"
Dec 16 12:28:24.178248 kubelet[2400]: I1216 12:28:24.178220 2400 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:28:24.179329 kubelet[2400]: I1216 12:28:24.179288 2400 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 16 12:28:24.179480 kubelet[2400]: W1216 12:28:24.179461 2400 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:28:24.183134 kubelet[2400]: I1216 12:28:24.183096 2400 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:28:24.183134 kubelet[2400]: I1216 12:28:24.183141 2400 server.go:1287] "Started kubelet" Dec 16 12:28:24.186463 kubelet[2400]: I1216 12:28:24.186414 2400 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:28:24.187587 kubelet[2400]: I1216 12:28:24.187500 2400 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:28:24.187697 kubelet[2400]: I1216 12:28:24.187561 2400 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:28:24.187943 kubelet[2400]: I1216 12:28:24.187914 2400 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:28:24.188384 kubelet[2400]: E1216 12:28:24.188111 2400 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.99.82.111:6443/api/v1/namespaces/default/events\": dial tcp 88.99.82.111:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-0-7f64ef3ba0.1881b1dfeaec4013 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-0-7f64ef3ba0,UID:ci-4459-2-2-0-7f64ef3ba0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-0-7f64ef3ba0,},FirstTimestamp:2025-12-16 12:28:24.183119891 +0000 UTC m=+0.859973872,LastTimestamp:2025-12-16 12:28:24.183119891 +0000 UTC m=+0.859973872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-0-7f64ef3ba0,}" Dec 16 12:28:24.190262 kubelet[2400]: I1216 12:28:24.190234 2400 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:28:24.191870 kubelet[2400]: I1216 12:28:24.191834 2400 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:28:24.196095 kubelet[2400]: I1216 12:28:24.196071 2400 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:28:24.196597 kubelet[2400]: E1216 12:28:24.196565 2400 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:28:24.196688 kubelet[2400]: E1216 12:28:24.196669 2400 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" Dec 16 12:28:24.197183 kubelet[2400]: I1216 12:28:24.197138 2400 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:28:24.197277 kubelet[2400]: I1216 12:28:24.197258 2400 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:28:24.198796 kubelet[2400]: I1216 12:28:24.197963 2400 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:28:24.198796 kubelet[2400]: I1216 12:28:24.198033 2400 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:28:24.198796 kubelet[2400]: W1216 12:28:24.198338 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://88.99.82.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 88.99.82.111:6443: connect: connection refused Dec 16 12:28:24.198796 kubelet[2400]: E1216 12:28:24.198472 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://88.99.82.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 88.99.82.111:6443: connect: connection refused" logger="UnhandledError" 
Dec 16 12:28:24.198796 kubelet[2400]: E1216 12:28:24.198567 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.82.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-0-7f64ef3ba0?timeout=10s\": dial tcp 88.99.82.111:6443: connect: connection refused" interval="200ms" Dec 16 12:28:24.199123 kubelet[2400]: I1216 12:28:24.199100 2400 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:28:24.219896 kubelet[2400]: I1216 12:28:24.219828 2400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:28:24.221024 kubelet[2400]: I1216 12:28:24.220992 2400 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 12:28:24.221024 kubelet[2400]: I1216 12:28:24.221024 2400 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:28:24.221114 kubelet[2400]: I1216 12:28:24.221050 2400 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:28:24.221114 kubelet[2400]: I1216 12:28:24.221057 2400 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:28:24.221157 kubelet[2400]: E1216 12:28:24.221101 2400 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:28:24.227214 kubelet[2400]: W1216 12:28:24.227090 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.99.82.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.99.82.111:6443: connect: connection refused Dec 16 12:28:24.227414 kubelet[2400]: E1216 12:28:24.227387 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.99.82.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.99.82.111:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:28:24.228599 kubelet[2400]: I1216 12:28:24.228579 2400 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:28:24.228724 kubelet[2400]: I1216 12:28:24.228711 2400 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:28:24.228814 kubelet[2400]: I1216 12:28:24.228805 2400 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:28:24.232683 kubelet[2400]: I1216 12:28:24.232651 2400 policy_none.go:49] "None policy: Start" Dec 16 12:28:24.232844 kubelet[2400]: I1216 12:28:24.232831 2400 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:28:24.232899 kubelet[2400]: I1216 12:28:24.232892 2400 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:28:24.241168 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 16 12:28:24.257245 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:28:24.261599 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:28:24.274475 kubelet[2400]: I1216 12:28:24.274430 2400 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:28:24.275092 kubelet[2400]: I1216 12:28:24.275060 2400 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:28:24.275239 kubelet[2400]: I1216 12:28:24.275086 2400 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:28:24.275653 kubelet[2400]: I1216 12:28:24.275624 2400 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:28:24.279487 kubelet[2400]: E1216 12:28:24.279451 2400 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:28:24.279487 kubelet[2400]: E1216 12:28:24.279491 2400 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-0-7f64ef3ba0\" not found" Dec 16 12:28:24.337788 systemd[1]: Created slice kubepods-burstable-pod008f0eb8705abf355281fc26217b03de.slice - libcontainer container kubepods-burstable-pod008f0eb8705abf355281fc26217b03de.slice. Dec 16 12:28:24.346124 kubelet[2400]: E1216 12:28:24.345895 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.352299 systemd[1]: Created slice kubepods-burstable-pod1f6c60e428c797ce80285326d590e63d.slice - libcontainer container kubepods-burstable-pod1f6c60e428c797ce80285326d590e63d.slice. 
Dec 16 12:28:24.363604 kubelet[2400]: E1216 12:28:24.363562 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.367431 systemd[1]: Created slice kubepods-burstable-pod0a8998f9703a61913bdb5ca2745d23ef.slice - libcontainer container kubepods-burstable-pod0a8998f9703a61913bdb5ca2745d23ef.slice. Dec 16 12:28:24.369677 kubelet[2400]: E1216 12:28:24.369497 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.377865 kubelet[2400]: I1216 12:28:24.377820 2400 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.378927 kubelet[2400]: E1216 12:28:24.378875 2400 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.99.82.111:6443/api/v1/nodes\": dial tcp 88.99.82.111:6443: connect: connection refused" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.399997 kubelet[2400]: I1216 12:28:24.399813 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.399997 kubelet[2400]: E1216 12:28:24.399854 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.82.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-0-7f64ef3ba0?timeout=10s\": dial tcp 88.99.82.111:6443: connect: connection refused" interval="400ms" Dec 16 12:28:24.399997 kubelet[2400]: I1216 12:28:24.399890 2400 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.399997 kubelet[2400]: I1216 12:28:24.399960 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.399997 kubelet[2400]: I1216 12:28:24.400016 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a8998f9703a61913bdb5ca2745d23ef-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"0a8998f9703a61913bdb5ca2745d23ef\") " pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.400507 kubelet[2400]: I1216 12:28:24.400059 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/008f0eb8705abf355281fc26217b03de-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"008f0eb8705abf355281fc26217b03de\") " pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.400507 kubelet[2400]: I1216 12:28:24.400099 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/008f0eb8705abf355281fc26217b03de-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"008f0eb8705abf355281fc26217b03de\") " 
pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.400507 kubelet[2400]: I1216 12:28:24.400161 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/008f0eb8705abf355281fc26217b03de-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"008f0eb8705abf355281fc26217b03de\") " pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.400507 kubelet[2400]: I1216 12:28:24.400218 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.400507 kubelet[2400]: I1216 12:28:24.400254 2400 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.582543 kubelet[2400]: I1216 12:28:24.582466 2400 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.583201 kubelet[2400]: E1216 12:28:24.583157 2400 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.99.82.111:6443/api/v1/nodes\": dial tcp 88.99.82.111:6443: connect: connection refused" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.649387 containerd[1536]: time="2025-12-16T12:28:24.648577393Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-0-7f64ef3ba0,Uid:008f0eb8705abf355281fc26217b03de,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:24.668134 containerd[1536]: time="2025-12-16T12:28:24.666110730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0,Uid:1f6c60e428c797ce80285326d590e63d,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:24.684670 containerd[1536]: time="2025-12-16T12:28:24.684625130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-0-7f64ef3ba0,Uid:0a8998f9703a61913bdb5ca2745d23ef,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:24.688447 containerd[1536]: time="2025-12-16T12:28:24.688344898Z" level=info msg="connecting to shim 7008af4a928b07637e64d7f80e39429f68396e234a52191afc58e5acf40aba46" address="unix:///run/containerd/s/3dc9bc599572362b3dcca440f02a7212e5f82cacc2e44e26a6a89e3eb316ae7b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:24.714882 containerd[1536]: time="2025-12-16T12:28:24.714827847Z" level=info msg="connecting to shim 364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319" address="unix:///run/containerd/s/11ff23c360ba8a07c55bace55d7648c48f58a95d8f7640fdb0154c1dc76b3a9a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:24.736916 containerd[1536]: time="2025-12-16T12:28:24.736716207Z" level=info msg="connecting to shim 49073cece6619cda0edfa148a7475d8cec5460fe2ac8c1ab0ceb5a4ec637c292" address="unix:///run/containerd/s/4547071a073faf7873c629b0c49e59eaea83703d2269812491ceb8e9e52867f7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:24.742990 systemd[1]: Started cri-containerd-7008af4a928b07637e64d7f80e39429f68396e234a52191afc58e5acf40aba46.scope - libcontainer container 7008af4a928b07637e64d7f80e39429f68396e234a52191afc58e5acf40aba46. 
Dec 16 12:28:24.763616 systemd[1]: Started cri-containerd-364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319.scope - libcontainer container 364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319. Dec 16 12:28:24.786018 systemd[1]: Started cri-containerd-49073cece6619cda0edfa148a7475d8cec5460fe2ac8c1ab0ceb5a4ec637c292.scope - libcontainer container 49073cece6619cda0edfa148a7475d8cec5460fe2ac8c1ab0ceb5a4ec637c292. Dec 16 12:28:24.801597 kubelet[2400]: E1216 12:28:24.801035 2400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.82.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-0-7f64ef3ba0?timeout=10s\": dial tcp 88.99.82.111:6443: connect: connection refused" interval="800ms" Dec 16 12:28:24.826672 containerd[1536]: time="2025-12-16T12:28:24.826614104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-0-7f64ef3ba0,Uid:008f0eb8705abf355281fc26217b03de,Namespace:kube-system,Attempt:0,} returns sandbox id \"7008af4a928b07637e64d7f80e39429f68396e234a52191afc58e5acf40aba46\"" Dec 16 12:28:24.832651 containerd[1536]: time="2025-12-16T12:28:24.832604686Z" level=info msg="CreateContainer within sandbox \"7008af4a928b07637e64d7f80e39429f68396e234a52191afc58e5acf40aba46\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:28:24.845198 containerd[1536]: time="2025-12-16T12:28:24.845071782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0,Uid:1f6c60e428c797ce80285326d590e63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319\"" Dec 16 12:28:24.851398 containerd[1536]: time="2025-12-16T12:28:24.851332051Z" level=info msg="CreateContainer within sandbox \"364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" 
Dec 16 12:28:24.853579 containerd[1536]: time="2025-12-16T12:28:24.853493743Z" level=info msg="Container 2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:24.866228 containerd[1536]: time="2025-12-16T12:28:24.865979519Z" level=info msg="CreateContainer within sandbox \"7008af4a928b07637e64d7f80e39429f68396e234a52191afc58e5acf40aba46\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43\"" Dec 16 12:28:24.867165 containerd[1536]: time="2025-12-16T12:28:24.866805379Z" level=info msg="StartContainer for \"2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43\"" Dec 16 12:28:24.868648 containerd[1536]: time="2025-12-16T12:28:24.868487499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-0-7f64ef3ba0,Uid:0a8998f9703a61913bdb5ca2745d23ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"49073cece6619cda0edfa148a7475d8cec5460fe2ac8c1ab0ceb5a4ec637c292\"" Dec 16 12:28:24.869197 containerd[1536]: time="2025-12-16T12:28:24.869163555Z" level=info msg="connecting to shim 2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43" address="unix:///run/containerd/s/3dc9bc599572362b3dcca440f02a7212e5f82cacc2e44e26a6a89e3eb316ae7b" protocol=ttrpc version=3 Dec 16 12:28:24.871494 containerd[1536]: time="2025-12-16T12:28:24.871422889Z" level=info msg="Container bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:24.872338 containerd[1536]: time="2025-12-16T12:28:24.872288789Z" level=info msg="CreateContainer within sandbox \"49073cece6619cda0edfa148a7475d8cec5460fe2ac8c1ab0ceb5a4ec637c292\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:28:24.879382 containerd[1536]: time="2025-12-16T12:28:24.879317756Z" level=info msg="CreateContainer within sandbox 
\"364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386\"" Dec 16 12:28:24.880218 containerd[1536]: time="2025-12-16T12:28:24.880184857Z" level=info msg="StartContainer for \"bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386\"" Dec 16 12:28:24.882562 containerd[1536]: time="2025-12-16T12:28:24.882489672Z" level=info msg="connecting to shim bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386" address="unix:///run/containerd/s/11ff23c360ba8a07c55bace55d7648c48f58a95d8f7640fdb0154c1dc76b3a9a" protocol=ttrpc version=3 Dec 16 12:28:24.887531 containerd[1536]: time="2025-12-16T12:28:24.887488831Z" level=info msg="Container 3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:24.897672 systemd[1]: Started cri-containerd-2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43.scope - libcontainer container 2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43. 
Dec 16 12:28:24.901312 containerd[1536]: time="2025-12-16T12:28:24.901204956Z" level=info msg="CreateContainer within sandbox \"49073cece6619cda0edfa148a7475d8cec5460fe2ac8c1ab0ceb5a4ec637c292\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed\"" Dec 16 12:28:24.903085 containerd[1536]: time="2025-12-16T12:28:24.903049320Z" level=info msg="StartContainer for \"3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed\"" Dec 16 12:28:24.905233 containerd[1536]: time="2025-12-16T12:28:24.905165291Z" level=info msg="connecting to shim 3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed" address="unix:///run/containerd/s/4547071a073faf7873c629b0c49e59eaea83703d2269812491ceb8e9e52867f7" protocol=ttrpc version=3 Dec 16 12:28:24.915684 systemd[1]: Started cri-containerd-bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386.scope - libcontainer container bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386. Dec 16 12:28:24.939332 systemd[1]: Started cri-containerd-3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed.scope - libcontainer container 3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed. 
Dec 16 12:28:24.973496 containerd[1536]: time="2025-12-16T12:28:24.972841059Z" level=info msg="StartContainer for \"2253a2a9574b021d7aa054c435704bb54971d0d65f784813fdacc47e0abe4b43\" returns successfully" Dec 16 12:28:24.995644 kubelet[2400]: I1216 12:28:24.995613 2400 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:24.996450 kubelet[2400]: E1216 12:28:24.996296 2400 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://88.99.82.111:6443/api/v1/nodes\": dial tcp 88.99.82.111:6443: connect: connection refused" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:25.013312 containerd[1536]: time="2025-12-16T12:28:25.013275162Z" level=info msg="StartContainer for \"bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386\" returns successfully" Dec 16 12:28:25.032346 containerd[1536]: time="2025-12-16T12:28:25.032303866Z" level=info msg="StartContainer for \"3115464919e3432dc056f09975a66e0a2c0b0b88e450e7ef73f32e2be77e96ed\" returns successfully" Dec 16 12:28:25.046990 kubelet[2400]: W1216 12:28:25.046876 2400 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://88.99.82.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 88.99.82.111:6443: connect: connection refused Dec 16 12:28:25.046990 kubelet[2400]: E1216 12:28:25.046952 2400 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://88.99.82.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 88.99.82.111:6443: connect: connection refused" logger="UnhandledError" Dec 16 12:28:25.239091 kubelet[2400]: E1216 12:28:25.238885 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" 
Dec 16 12:28:25.240960 kubelet[2400]: E1216 12:28:25.240917 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:25.245035 kubelet[2400]: E1216 12:28:25.245002 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:25.799672 kubelet[2400]: I1216 12:28:25.799629 2400 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:26.247020 kubelet[2400]: E1216 12:28:26.246963 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:26.248602 kubelet[2400]: E1216 12:28:26.248571 2400 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.016279 kubelet[2400]: E1216 12:28:27.016207 2400 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-0-7f64ef3ba0\" not found" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.103339 kubelet[2400]: E1216 12:28:27.103076 2400 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459-2-2-0-7f64ef3ba0.1881b1dfeaec4013 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-0-7f64ef3ba0,UID:ci-4459-2-2-0-7f64ef3ba0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-0-7f64ef3ba0,},FirstTimestamp:2025-12-16 12:28:24.183119891 +0000 UTC 
m=+0.859973872,LastTimestamp:2025-12-16 12:28:24.183119891 +0000 UTC m=+0.859973872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-0-7f64ef3ba0,}" Dec 16 12:28:27.167036 kubelet[2400]: I1216 12:28:27.166779 2400 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.179160 kubelet[2400]: I1216 12:28:27.179123 2400 apiserver.go:52] "Watching apiserver" Dec 16 12:28:27.199036 kubelet[2400]: I1216 12:28:27.199001 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.244285 kubelet[2400]: E1216 12:28:27.244246 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.244285 kubelet[2400]: I1216 12:28:27.244279 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.253172 kubelet[2400]: E1216 12:28:27.253123 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.253172 kubelet[2400]: I1216 12:28:27.253156 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.261655 kubelet[2400]: E1216 12:28:27.261616 2400 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-0-7f64ef3ba0\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:27.299441 
kubelet[2400]: I1216 12:28:27.298709 2400 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:28:28.589344 kubelet[2400]: I1216 12:28:28.589294 2400 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:29.271496 systemd[1]: Reload requested from client PID 2669 ('systemctl') (unit session-7.scope)... Dec 16 12:28:29.271515 systemd[1]: Reloading... Dec 16 12:28:29.411453 zram_generator::config[2716]: No configuration found. Dec 16 12:28:29.638545 systemd[1]: Reloading finished in 366 ms. Dec 16 12:28:29.670199 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:28:29.697209 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:28:29.697730 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:28:29.697852 systemd[1]: kubelet.service: Consumed 1.300s CPU time, 127.7M memory peak. Dec 16 12:28:29.704116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:28:29.882034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:28:29.892991 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:28:29.949656 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:28:29.949656 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:28:29.949656 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:28:29.950062 kubelet[2757]: I1216 12:28:29.949765 2757 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:28:29.957489 kubelet[2757]: I1216 12:28:29.957350 2757 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 12:28:29.957489 kubelet[2757]: I1216 12:28:29.957470 2757 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:28:29.957847 kubelet[2757]: I1216 12:28:29.957811 2757 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 12:28:29.962321 kubelet[2757]: I1216 12:28:29.962197 2757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 12:28:29.965457 kubelet[2757]: I1216 12:28:29.965237 2757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:28:29.973934 kubelet[2757]: I1216 12:28:29.973864 2757 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:28:29.978347 kubelet[2757]: I1216 12:28:29.978288 2757 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:28:29.978662 kubelet[2757]: I1216 12:28:29.978614 2757 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:28:29.978862 kubelet[2757]: I1216 12:28:29.978649 2757 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-0-7f64ef3ba0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:28:29.978862 kubelet[2757]: I1216 12:28:29.978849 2757 topology_manager.go:138] "Creating topology manager 
with none policy" Dec 16 12:28:29.978862 kubelet[2757]: I1216 12:28:29.978858 2757 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 12:28:29.979030 kubelet[2757]: I1216 12:28:29.978904 2757 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:28:29.979076 kubelet[2757]: I1216 12:28:29.979059 2757 kubelet.go:446] "Attempting to sync node with API server" Dec 16 12:28:29.979112 kubelet[2757]: I1216 12:28:29.979077 2757 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:28:29.979556 kubelet[2757]: I1216 12:28:29.979534 2757 kubelet.go:352] "Adding apiserver pod source" Dec 16 12:28:29.979556 kubelet[2757]: I1216 12:28:29.979558 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:28:29.982491 kubelet[2757]: I1216 12:28:29.982460 2757 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:28:29.983017 kubelet[2757]: I1216 12:28:29.982953 2757 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 12:28:29.983436 kubelet[2757]: I1216 12:28:29.983402 2757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:28:29.983436 kubelet[2757]: I1216 12:28:29.983436 2757 server.go:1287] "Started kubelet" Dec 16 12:28:29.987614 kubelet[2757]: I1216 12:28:29.987381 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:28:29.992039 kubelet[2757]: I1216 12:28:29.989668 2757 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:28:30.006370 kubelet[2757]: I1216 12:28:30.004948 2757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 12:28:30.006370 kubelet[2757]: I1216 12:28:30.005997 2757 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:28:30.006370 kubelet[2757]: I1216 12:28:30.006024 2757 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 12:28:30.006370 kubelet[2757]: I1216 12:28:30.006048 2757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:28:30.006370 kubelet[2757]: I1216 12:28:30.006055 2757 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 12:28:30.006370 kubelet[2757]: E1216 12:28:30.006099 2757 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:28:30.008009 kubelet[2757]: I1216 12:28:30.007956 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:28:30.008453 kubelet[2757]: I1216 12:28:30.008419 2757 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:28:30.008872 kubelet[2757]: I1216 12:28:30.008843 2757 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:28:30.010739 kubelet[2757]: I1216 12:28:30.010714 2757 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:28:30.011129 kubelet[2757]: E1216 12:28:30.011092 2757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-0-7f64ef3ba0\" not found" Dec 16 12:28:30.012442 kubelet[2757]: I1216 12:28:30.012406 2757 server.go:479] "Adding debug handlers to kubelet server" Dec 16 12:28:30.020426 kubelet[2757]: I1216 12:28:30.020392 2757 factory.go:221] Registration of the systemd container factory successfully Dec 16 12:28:30.020630 kubelet[2757]: I1216 12:28:30.020608 2757 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:28:30.033397 kubelet[2757]: I1216 12:28:30.032113 2757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:28:30.033727 kubelet[2757]: I1216 12:28:30.033703 2757 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:28:30.036618 kubelet[2757]: I1216 12:28:30.036592 2757 factory.go:221] Registration of the containerd container factory successfully Dec 16 12:28:30.083705 kubelet[2757]: I1216 12:28:30.083677 2757 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:28:30.083705 kubelet[2757]: I1216 12:28:30.083696 2757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:28:30.083705 kubelet[2757]: I1216 12:28:30.083717 2757 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:28:30.083916 kubelet[2757]: I1216 12:28:30.083897 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:28:30.083946 kubelet[2757]: I1216 12:28:30.083913 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:28:30.083946 kubelet[2757]: I1216 12:28:30.083932 2757 policy_none.go:49] "None policy: Start" Dec 16 12:28:30.083946 kubelet[2757]: I1216 12:28:30.083941 2757 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:28:30.084004 kubelet[2757]: I1216 12:28:30.083950 2757 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:28:30.084055 kubelet[2757]: I1216 12:28:30.084043 2757 state_mem.go:75] "Updated machine memory state" Dec 16 12:28:30.088899 kubelet[2757]: I1216 12:28:30.088582 2757 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 12:28:30.090717 kubelet[2757]: I1216 12:28:30.090656 2757 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:28:30.092216 kubelet[2757]: I1216 12:28:30.092168 2757 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:28:30.092628 kubelet[2757]: I1216 12:28:30.092614 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:28:30.095308 kubelet[2757]: E1216 12:28:30.094035 2757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:28:30.106809 kubelet[2757]: I1216 12:28:30.106764 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.108170 kubelet[2757]: I1216 12:28:30.108151 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.112370 kubelet[2757]: I1216 12:28:30.111564 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.112370 kubelet[2757]: I1216 12:28:30.111762 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.117215 kubelet[2757]: E1216 12:28:30.117007 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-0-7f64ef3ba0\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.120691 kubelet[2757]: E1216 12:28:30.120532 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.120691 kubelet[2757]: I1216 12:28:30.120563 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.128322 kubelet[2757]: E1216 12:28:30.128231 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.128322 kubelet[2757]: I1216 12:28:30.128281 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134172 kubelet[2757]: I1216 12:28:30.134125 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/008f0eb8705abf355281fc26217b03de-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"008f0eb8705abf355281fc26217b03de\") " pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134172 kubelet[2757]: I1216 12:28:30.134171 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134423 kubelet[2757]: I1216 12:28:30.134201 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134423 kubelet[2757]: I1216 12:28:30.134225 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a8998f9703a61913bdb5ca2745d23ef-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"0a8998f9703a61913bdb5ca2745d23ef\") " 
pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134423 kubelet[2757]: I1216 12:28:30.134247 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/008f0eb8705abf355281fc26217b03de-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"008f0eb8705abf355281fc26217b03de\") " pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134423 kubelet[2757]: I1216 12:28:30.134266 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/008f0eb8705abf355281fc26217b03de-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"008f0eb8705abf355281fc26217b03de\") " pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134423 kubelet[2757]: I1216 12:28:30.134288 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134683 kubelet[2757]: I1216 12:28:30.134307 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.134683 kubelet[2757]: I1216 12:28:30.134328 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1f6c60e428c797ce80285326d590e63d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0\" (UID: \"1f6c60e428c797ce80285326d590e63d\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.137135 kubelet[2757]: E1216 12:28:30.137092 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-0-7f64ef3ba0\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.198941 kubelet[2757]: I1216 12:28:30.197944 2757 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.211204 kubelet[2757]: I1216 12:28:30.211168 2757 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.211405 kubelet[2757]: I1216 12:28:30.211258 2757 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:30.981543 kubelet[2757]: I1216 12:28:30.981487 2757 apiserver.go:52] "Watching apiserver" Dec 16 12:28:31.033893 kubelet[2757]: I1216 12:28:31.033614 2757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:28:31.033893 kubelet[2757]: I1216 12:28:31.033599 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-0-7f64ef3ba0" podStartSLOduration=1.033579355 podStartE2EDuration="1.033579355s" podCreationTimestamp="2025-12-16 12:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:31.030521389 +0000 UTC m=+1.130794126" watchObservedRunningTime="2025-12-16 12:28:31.033579355 +0000 UTC m=+1.133852092" Dec 16 12:28:31.033893 kubelet[2757]: I1216 12:28:31.033753 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" podStartSLOduration=1.0337476780000001 podStartE2EDuration="1.033747678s" podCreationTimestamp="2025-12-16 12:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:31.015560363 +0000 UTC m=+1.115833140" watchObservedRunningTime="2025-12-16 12:28:31.033747678 +0000 UTC m=+1.134020415" Dec 16 12:28:31.048384 kubelet[2757]: I1216 12:28:31.047655 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-0-7f64ef3ba0" podStartSLOduration=3.047630168 podStartE2EDuration="3.047630168s" podCreationTimestamp="2025-12-16 12:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:31.047154881 +0000 UTC m=+1.147427618" watchObservedRunningTime="2025-12-16 12:28:31.047630168 +0000 UTC m=+1.147902905" Dec 16 12:28:31.056759 kubelet[2757]: I1216 12:28:31.056726 2757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:31.070382 kubelet[2757]: E1216 12:28:31.069073 2757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-0-7f64ef3ba0\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:28:36.807008 kubelet[2757]: I1216 12:28:36.806800 2757 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:28:36.809275 kubelet[2757]: I1216 12:28:36.807755 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:28:36.809485 containerd[1536]: time="2025-12-16T12:28:36.807416375Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 12:28:37.724290 systemd[1]: Created slice kubepods-besteffort-pod70392733_2a07_4ffd_8a31_03351d64b13a.slice - libcontainer container kubepods-besteffort-pod70392733_2a07_4ffd_8a31_03351d64b13a.slice. Dec 16 12:28:37.784670 kubelet[2757]: I1216 12:28:37.784576 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70392733-2a07-4ffd-8a31-03351d64b13a-kube-proxy\") pod \"kube-proxy-nthjb\" (UID: \"70392733-2a07-4ffd-8a31-03351d64b13a\") " pod="kube-system/kube-proxy-nthjb" Dec 16 12:28:37.784942 kubelet[2757]: I1216 12:28:37.784886 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70392733-2a07-4ffd-8a31-03351d64b13a-xtables-lock\") pod \"kube-proxy-nthjb\" (UID: \"70392733-2a07-4ffd-8a31-03351d64b13a\") " pod="kube-system/kube-proxy-nthjb" Dec 16 12:28:37.785022 kubelet[2757]: I1216 12:28:37.784961 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70392733-2a07-4ffd-8a31-03351d64b13a-lib-modules\") pod \"kube-proxy-nthjb\" (UID: \"70392733-2a07-4ffd-8a31-03351d64b13a\") " pod="kube-system/kube-proxy-nthjb" Dec 16 12:28:37.785022 kubelet[2757]: I1216 12:28:37.784984 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466bw\" (UniqueName: \"kubernetes.io/projected/70392733-2a07-4ffd-8a31-03351d64b13a-kube-api-access-466bw\") pod \"kube-proxy-nthjb\" (UID: \"70392733-2a07-4ffd-8a31-03351d64b13a\") " pod="kube-system/kube-proxy-nthjb" Dec 16 12:28:37.922942 systemd[1]: Created slice kubepods-besteffort-pod3c35e47e_c421_4306_a8ff_261667ff316d.slice - libcontainer container kubepods-besteffort-pod3c35e47e_c421_4306_a8ff_261667ff316d.slice. 
Dec 16 12:28:37.986235 kubelet[2757]: I1216 12:28:37.986050 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wqd\" (UniqueName: \"kubernetes.io/projected/3c35e47e-c421-4306-a8ff-261667ff316d-kube-api-access-q8wqd\") pod \"tigera-operator-7dcd859c48-swwvw\" (UID: \"3c35e47e-c421-4306-a8ff-261667ff316d\") " pod="tigera-operator/tigera-operator-7dcd859c48-swwvw" Dec 16 12:28:37.986235 kubelet[2757]: I1216 12:28:37.986106 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c35e47e-c421-4306-a8ff-261667ff316d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-swwvw\" (UID: \"3c35e47e-c421-4306-a8ff-261667ff316d\") " pod="tigera-operator/tigera-operator-7dcd859c48-swwvw" Dec 16 12:28:38.036330 containerd[1536]: time="2025-12-16T12:28:38.036278068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nthjb,Uid:70392733-2a07-4ffd-8a31-03351d64b13a,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:38.061263 containerd[1536]: time="2025-12-16T12:28:38.060551822Z" level=info msg="connecting to shim 2a87b8d14ca5840a8eeb737942747fa5ca50508bae737bdb66609b70d0698463" address="unix:///run/containerd/s/80cebc172f612098115bd732587495a1e08820d50c833baadaf196494c408832" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:38.097627 systemd[1]: Started cri-containerd-2a87b8d14ca5840a8eeb737942747fa5ca50508bae737bdb66609b70d0698463.scope - libcontainer container 2a87b8d14ca5840a8eeb737942747fa5ca50508bae737bdb66609b70d0698463. 
Dec 16 12:28:38.135683 containerd[1536]: time="2025-12-16T12:28:38.135598144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nthjb,Uid:70392733-2a07-4ffd-8a31-03351d64b13a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a87b8d14ca5840a8eeb737942747fa5ca50508bae737bdb66609b70d0698463\"" Dec 16 12:28:38.140396 containerd[1536]: time="2025-12-16T12:28:38.140320750Z" level=info msg="CreateContainer within sandbox \"2a87b8d14ca5840a8eeb737942747fa5ca50508bae737bdb66609b70d0698463\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:28:38.155900 containerd[1536]: time="2025-12-16T12:28:38.155607697Z" level=info msg="Container 6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:38.169414 containerd[1536]: time="2025-12-16T12:28:38.169290229Z" level=info msg="CreateContainer within sandbox \"2a87b8d14ca5840a8eeb737942747fa5ca50508bae737bdb66609b70d0698463\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d\"" Dec 16 12:28:38.170587 containerd[1536]: time="2025-12-16T12:28:38.170542761Z" level=info msg="StartContainer for \"6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d\"" Dec 16 12:28:38.172465 containerd[1536]: time="2025-12-16T12:28:38.172433939Z" level=info msg="connecting to shim 6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d" address="unix:///run/containerd/s/80cebc172f612098115bd732587495a1e08820d50c833baadaf196494c408832" protocol=ttrpc version=3 Dec 16 12:28:38.198678 systemd[1]: Started cri-containerd-6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d.scope - libcontainer container 6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d. 
Dec 16 12:28:38.229337 containerd[1536]: time="2025-12-16T12:28:38.229264686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-swwvw,Uid:3c35e47e-c421-4306-a8ff-261667ff316d,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:28:38.251741 containerd[1536]: time="2025-12-16T12:28:38.251625341Z" level=info msg="connecting to shim 6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce" address="unix:///run/containerd/s/07cd2268c32c21b368d764f7ec08bccf93a4b31f14e470d0c14f2aa1b46bddbe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:38.281604 systemd[1]: Started cri-containerd-6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce.scope - libcontainer container 6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce. Dec 16 12:28:38.294862 containerd[1536]: time="2025-12-16T12:28:38.294725676Z" level=info msg="StartContainer for \"6885660835d8f7f98d6d64f916163db6482f35ff46412363ffffb8f600a4184d\" returns successfully" Dec 16 12:28:38.330427 containerd[1536]: time="2025-12-16T12:28:38.330267579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-swwvw,Uid:3c35e47e-c421-4306-a8ff-261667ff316d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce\"" Dec 16 12:28:38.334060 containerd[1536]: time="2025-12-16T12:28:38.334022295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:28:39.097641 kubelet[2757]: I1216 12:28:39.097109 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nthjb" podStartSLOduration=2.097082104 podStartE2EDuration="2.097082104s" podCreationTimestamp="2025-12-16 12:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:39.096924662 +0000 UTC m=+9.197197399" watchObservedRunningTime="2025-12-16 
12:28:39.097082104 +0000 UTC m=+9.197354841" Dec 16 12:28:40.416289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849210138.mount: Deactivated successfully. Dec 16 12:28:40.859397 containerd[1536]: time="2025-12-16T12:28:40.859008363Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:40.861015 containerd[1536]: time="2025-12-16T12:28:40.860935860Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:28:40.862317 containerd[1536]: time="2025-12-16T12:28:40.861917148Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:40.864406 containerd[1536]: time="2025-12-16T12:28:40.864380889Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:40.865949 containerd[1536]: time="2025-12-16T12:28:40.865916702Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.531857527s" Dec 16 12:28:40.866270 containerd[1536]: time="2025-12-16T12:28:40.866251825Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:28:40.870090 containerd[1536]: time="2025-12-16T12:28:40.870045817Z" level=info msg="CreateContainer within sandbox \"6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:28:40.878961 containerd[1536]: time="2025-12-16T12:28:40.878823731Z" level=info msg="Container 849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:40.892753 containerd[1536]: time="2025-12-16T12:28:40.892636128Z" level=info msg="CreateContainer within sandbox \"6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d\"" Dec 16 12:28:40.895051 containerd[1536]: time="2025-12-16T12:28:40.893582856Z" level=info msg="StartContainer for \"849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d\"" Dec 16 12:28:40.895051 containerd[1536]: time="2025-12-16T12:28:40.894446663Z" level=info msg="connecting to shim 849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d" address="unix:///run/containerd/s/07cd2268c32c21b368d764f7ec08bccf93a4b31f14e470d0c14f2aa1b46bddbe" protocol=ttrpc version=3 Dec 16 12:28:40.931557 systemd[1]: Started cri-containerd-849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d.scope - libcontainer container 849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d. 
Dec 16 12:28:40.976708 containerd[1536]: time="2025-12-16T12:28:40.976598958Z" level=info msg="StartContainer for \"849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d\" returns successfully" Dec 16 12:28:41.633872 kubelet[2757]: I1216 12:28:41.633444 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-swwvw" podStartSLOduration=2.098509388 podStartE2EDuration="4.633418942s" podCreationTimestamp="2025-12-16 12:28:37 +0000 UTC" firstStartedPulling="2025-12-16 12:28:38.332618921 +0000 UTC m=+8.432891658" lastFinishedPulling="2025-12-16 12:28:40.867528475 +0000 UTC m=+10.967801212" observedRunningTime="2025-12-16 12:28:41.106168759 +0000 UTC m=+11.206441496" watchObservedRunningTime="2025-12-16 12:28:41.633418942 +0000 UTC m=+11.733691799" Dec 16 12:28:47.034964 sudo[1838]: pam_unix(sudo:session): session closed for user root Dec 16 12:28:47.205703 sshd[1837]: Connection closed by 139.178.89.65 port 36246 Dec 16 12:28:47.207950 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:47.214477 systemd[1]: sshd@6-88.99.82.111:22-139.178.89.65:36246.service: Deactivated successfully. Dec 16 12:28:47.214622 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:28:47.219905 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:28:47.221533 systemd[1]: session-7.scope: Consumed 9.405s CPU time, 218.1M memory peak. Dec 16 12:28:47.225574 systemd-logind[1519]: Removed session 7. Dec 16 12:28:57.525202 systemd[1]: Created slice kubepods-besteffort-pod063e7e17_9ab0_4883_8ea7_e65c1d356a44.slice - libcontainer container kubepods-besteffort-pod063e7e17_9ab0_4883_8ea7_e65c1d356a44.slice. 
Dec 16 12:28:57.620374 kubelet[2757]: I1216 12:28:57.620308 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/063e7e17-9ab0-4883-8ea7-e65c1d356a44-tigera-ca-bundle\") pod \"calico-typha-7d6d559789-7x8zr\" (UID: \"063e7e17-9ab0-4883-8ea7-e65c1d356a44\") " pod="calico-system/calico-typha-7d6d559789-7x8zr"
Dec 16 12:28:57.620872 kubelet[2757]: I1216 12:28:57.620408 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clr29\" (UniqueName: \"kubernetes.io/projected/063e7e17-9ab0-4883-8ea7-e65c1d356a44-kube-api-access-clr29\") pod \"calico-typha-7d6d559789-7x8zr\" (UID: \"063e7e17-9ab0-4883-8ea7-e65c1d356a44\") " pod="calico-system/calico-typha-7d6d559789-7x8zr"
Dec 16 12:28:57.620872 kubelet[2757]: I1216 12:28:57.620431 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/063e7e17-9ab0-4883-8ea7-e65c1d356a44-typha-certs\") pod \"calico-typha-7d6d559789-7x8zr\" (UID: \"063e7e17-9ab0-4883-8ea7-e65c1d356a44\") " pod="calico-system/calico-typha-7d6d559789-7x8zr"
Dec 16 12:28:57.720280 kubelet[2757]: I1216 12:28:57.720238 2757 status_manager.go:890] "Failed to get status for pod" podUID="08322b52-c1c0-423e-ae62-db7de174f36d" pod="calico-system/calico-node-sj62k" err="pods \"calico-node-sj62k\" is forbidden: User \"system:node:ci-4459-2-2-0-7f64ef3ba0\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-2-2-0-7f64ef3ba0' and this object"
Dec 16 12:28:57.722139 systemd[1]: Created slice kubepods-besteffort-pod08322b52_c1c0_423e_ae62_db7de174f36d.slice - libcontainer container kubepods-besteffort-pod08322b52_c1c0_423e_ae62_db7de174f36d.slice.
Dec 16 12:28:57.822201 kubelet[2757]: I1216 12:28:57.822135 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-lib-modules\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822201 kubelet[2757]: I1216 12:28:57.822188 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08322b52-c1c0-423e-ae62-db7de174f36d-node-certs\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822201 kubelet[2757]: I1216 12:28:57.822207 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-xtables-lock\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822485 kubelet[2757]: I1216 12:28:57.822231 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08322b52-c1c0-423e-ae62-db7de174f36d-tigera-ca-bundle\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822485 kubelet[2757]: I1216 12:28:57.822252 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-var-lib-calico\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822485 kubelet[2757]: I1216 12:28:57.822269 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-policysync\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822485 kubelet[2757]: I1216 12:28:57.822286 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-cni-bin-dir\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822485 kubelet[2757]: I1216 12:28:57.822305 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-cni-log-dir\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822611 kubelet[2757]: I1216 12:28:57.822322 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-cni-net-dir\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822611 kubelet[2757]: I1216 12:28:57.822341 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-var-run-calico\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822611 kubelet[2757]: I1216 12:28:57.822374 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltqn\" (UniqueName: \"kubernetes.io/projected/08322b52-c1c0-423e-ae62-db7de174f36d-kube-api-access-sltqn\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.822611 kubelet[2757]: I1216 12:28:57.822403 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08322b52-c1c0-423e-ae62-db7de174f36d-flexvol-driver-host\") pod \"calico-node-sj62k\" (UID: \"08322b52-c1c0-423e-ae62-db7de174f36d\") " pod="calico-system/calico-node-sj62k"
Dec 16 12:28:57.831401 containerd[1536]: time="2025-12-16T12:28:57.831144400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d6d559789-7x8zr,Uid:063e7e17-9ab0-4883-8ea7-e65c1d356a44,Namespace:calico-system,Attempt:0,}"
Dec 16 12:28:57.861065 containerd[1536]: time="2025-12-16T12:28:57.860969760Z" level=info msg="connecting to shim 2a29731e261ae5a905fb39df3c50bfdae7ffc384a314376a57efa9f1b7c34031" address="unix:///run/containerd/s/9fb40ad4b18f34f38d12b89fee88d41e59ef769cd0fc3e70b4b4787bd6a5f3e3" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:28:57.904149 kubelet[2757]: E1216 12:28:57.903140 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac"
Dec 16 12:28:57.903605 systemd[1]: Started cri-containerd-2a29731e261ae5a905fb39df3c50bfdae7ffc384a314376a57efa9f1b7c34031.scope - libcontainer container 2a29731e261ae5a905fb39df3c50bfdae7ffc384a314376a57efa9f1b7c34031.
Dec 16 12:28:57.923129 kubelet[2757]: I1216 12:28:57.923039 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de20fd2d-5c54-466a-8865-47a00aec7cac-socket-dir\") pod \"csi-node-driver-c5llv\" (UID: \"de20fd2d-5c54-466a-8865-47a00aec7cac\") " pod="calico-system/csi-node-driver-c5llv"
Dec 16 12:28:57.923129 kubelet[2757]: I1216 12:28:57.923118 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de20fd2d-5c54-466a-8865-47a00aec7cac-registration-dir\") pod \"csi-node-driver-c5llv\" (UID: \"de20fd2d-5c54-466a-8865-47a00aec7cac\") " pod="calico-system/csi-node-driver-c5llv"
Dec 16 12:28:57.923469 kubelet[2757]: I1216 12:28:57.923151 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de20fd2d-5c54-466a-8865-47a00aec7cac-kubelet-dir\") pod \"csi-node-driver-c5llv\" (UID: \"de20fd2d-5c54-466a-8865-47a00aec7cac\") " pod="calico-system/csi-node-driver-c5llv"
Dec 16 12:28:57.923469 kubelet[2757]: I1216 12:28:57.923178 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de20fd2d-5c54-466a-8865-47a00aec7cac-varrun\") pod \"csi-node-driver-c5llv\" (UID: \"de20fd2d-5c54-466a-8865-47a00aec7cac\") " pod="calico-system/csi-node-driver-c5llv"
Dec 16 12:28:57.923469 kubelet[2757]: I1216 12:28:57.923259 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkl2g\" (UniqueName: \"kubernetes.io/projected/de20fd2d-5c54-466a-8865-47a00aec7cac-kube-api-access-pkl2g\") pod \"csi-node-driver-c5llv\" (UID: \"de20fd2d-5c54-466a-8865-47a00aec7cac\") " pod="calico-system/csi-node-driver-c5llv"
Dec 16 12:28:57.930395 kubelet[2757]: E1216 12:28:57.928941 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:57.930395 kubelet[2757]: W1216 12:28:57.928973 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:57.930395 kubelet[2757]: E1216 12:28:57.929007 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:57.931533 kubelet[2757]: E1216 12:28:57.931472 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:57.931533 kubelet[2757]: W1216 12:28:57.931490 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:57.931533 kubelet[2757]: E1216 12:28:57.931503 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:57.943138 kubelet[2757]: E1216 12:28:57.943057 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:57.943138 kubelet[2757]: W1216 12:28:57.943080 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:57.943138 kubelet[2757]: E1216 12:28:57.943100 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:57.994141 containerd[1536]: time="2025-12-16T12:28:57.993980046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d6d559789-7x8zr,Uid:063e7e17-9ab0-4883-8ea7-e65c1d356a44,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a29731e261ae5a905fb39df3c50bfdae7ffc384a314376a57efa9f1b7c34031\""
Dec 16 12:28:57.998540 containerd[1536]: time="2025-12-16T12:28:57.997881311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 12:28:58.025497 kubelet[2757]: E1216 12:28:58.025444 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.025497 kubelet[2757]: W1216 12:28:58.025473 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.025497 kubelet[2757]: E1216 12:28:58.025497 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.025902 kubelet[2757]: E1216 12:28:58.025877 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.025902 kubelet[2757]: W1216 12:28:58.025897 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.025971 kubelet[2757]: E1216 12:28:58.025914 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.026102 kubelet[2757]: E1216 12:28:58.026083 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.026102 kubelet[2757]: W1216 12:28:58.026097 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.026234 kubelet[2757]: E1216 12:28:58.026180 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.026321 kubelet[2757]: E1216 12:28:58.026307 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.026352 kubelet[2757]: W1216 12:28:58.026319 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.026352 kubelet[2757]: E1216 12:28:58.026338 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.026546 kubelet[2757]: E1216 12:28:58.026529 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.026546 kubelet[2757]: W1216 12:28:58.026542 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.026546 kubelet[2757]: E1216 12:28:58.026557 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.026799 kubelet[2757]: E1216 12:28:58.026774 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.026799 kubelet[2757]: W1216 12:28:58.026788 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.026858 kubelet[2757]: E1216 12:28:58.026807 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.026999 kubelet[2757]: E1216 12:28:58.026984 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.026999 kubelet[2757]: W1216 12:28:58.026996 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.027069 kubelet[2757]: E1216 12:28:58.027007 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.027455 kubelet[2757]: E1216 12:28:58.027432 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.027455 kubelet[2757]: W1216 12:28:58.027449 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.027545 kubelet[2757]: E1216 12:28:58.027470 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.027792 kubelet[2757]: E1216 12:28:58.027624 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.027792 kubelet[2757]: W1216 12:28:58.027637 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.027792 kubelet[2757]: E1216 12:28:58.027646 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.029493 kubelet[2757]: E1216 12:28:58.029470 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.029596 kubelet[2757]: W1216 12:28:58.029583 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.029668 kubelet[2757]: E1216 12:28:58.029652 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.030007 kubelet[2757]: E1216 12:28:58.029993 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.030149 kubelet[2757]: W1216 12:28:58.030075 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.030149 kubelet[2757]: E1216 12:28:58.030092 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.030609 kubelet[2757]: E1216 12:28:58.030591 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.030609 kubelet[2757]: W1216 12:28:58.030606 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.030761 kubelet[2757]: E1216 12:28:58.030725 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.030881 containerd[1536]: time="2025-12-16T12:28:58.030833333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sj62k,Uid:08322b52-c1c0-423e-ae62-db7de174f36d,Namespace:calico-system,Attempt:0,}"
Dec 16 12:28:58.031314 kubelet[2757]: E1216 12:28:58.031286 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.031314 kubelet[2757]: W1216 12:28:58.031312 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.031437 kubelet[2757]: E1216 12:28:58.031326 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.031875 kubelet[2757]: E1216 12:28:58.031841 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.031875 kubelet[2757]: W1216 12:28:58.031866 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.032003 kubelet[2757]: E1216 12:28:58.031912 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.032474 kubelet[2757]: E1216 12:28:58.032434 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.032474 kubelet[2757]: W1216 12:28:58.032469 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.032582 kubelet[2757]: E1216 12:28:58.032488 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.032835 kubelet[2757]: E1216 12:28:58.032815 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.032835 kubelet[2757]: W1216 12:28:58.032829 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.032934 kubelet[2757]: E1216 12:28:58.032914 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.033979 kubelet[2757]: E1216 12:28:58.033614 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.034118 kubelet[2757]: W1216 12:28:58.033983 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.034214 kubelet[2757]: E1216 12:28:58.034174 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.034695 kubelet[2757]: E1216 12:28:58.034662 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.034695 kubelet[2757]: W1216 12:28:58.034693 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.034879 kubelet[2757]: E1216 12:28:58.034837 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.036945 kubelet[2757]: E1216 12:28:58.035834 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.037313 kubelet[2757]: W1216 12:28:58.036946 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.037313 kubelet[2757]: E1216 12:28:58.037068 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.037919 kubelet[2757]: E1216 12:28:58.037894 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.038026 kubelet[2757]: W1216 12:28:58.037916 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.038058 kubelet[2757]: E1216 12:28:58.038035 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.038421 kubelet[2757]: E1216 12:28:58.038400 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.038421 kubelet[2757]: W1216 12:28:58.038416 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.038642 kubelet[2757]: E1216 12:28:58.038607 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.039140 kubelet[2757]: E1216 12:28:58.039121 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.039237 kubelet[2757]: W1216 12:28:58.039202 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.039237 kubelet[2757]: E1216 12:28:58.039224 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.040510 kubelet[2757]: E1216 12:28:58.040487 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.040510 kubelet[2757]: W1216 12:28:58.040504 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.041498 kubelet[2757]: E1216 12:28:58.041464 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.041814 kubelet[2757]: E1216 12:28:58.041798 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.041814 kubelet[2757]: W1216 12:28:58.041813 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.042268 kubelet[2757]: E1216 12:28:58.042236 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.043091 kubelet[2757]: E1216 12:28:58.042991 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.043091 kubelet[2757]: W1216 12:28:58.043014 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.043091 kubelet[2757]: E1216 12:28:58.043029 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.060190 kubelet[2757]: E1216 12:28:58.060149 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:28:58.060564 kubelet[2757]: W1216 12:28:58.060426 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:28:58.060564 kubelet[2757]: E1216 12:28:58.060454 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:28:58.068768 containerd[1536]: time="2025-12-16T12:28:58.068688760Z" level=info msg="connecting to shim c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29" address="unix:///run/containerd/s/4e6c0baecbff5f392a169dc714ea52ea2d17724a0ea56e0e2b5e5d4a648090ef" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:28:58.105549 systemd[1]: Started cri-containerd-c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29.scope - libcontainer container c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29.
Dec 16 12:28:58.154094 containerd[1536]: time="2025-12-16T12:28:58.154045826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sj62k,Uid:08322b52-c1c0-423e-ae62-db7de174f36d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\""
Dec 16 12:28:59.426106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1781538813.mount: Deactivated successfully.
Dec 16 12:28:59.943245 containerd[1536]: time="2025-12-16T12:28:59.943182010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:59.945124 containerd[1536]: time="2025-12-16T12:28:59.945078778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 16 12:28:59.946378 containerd[1536]: time="2025-12-16T12:28:59.946014122Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:59.950145 containerd[1536]: time="2025-12-16T12:28:59.950092266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:28:59.950901 containerd[1536]: time="2025-12-16T12:28:59.950865965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.952944093s"
Dec 16 12:28:59.950969 containerd[1536]: time="2025-12-16T12:28:59.950901046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 16 12:28:59.952553 containerd[1536]: time="2025-12-16T12:28:59.952528927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 12:28:59.970251 containerd[1536]: time="2025-12-16T12:28:59.970212456Z" level=info msg="CreateContainer within sandbox \"2a29731e261ae5a905fb39df3c50bfdae7ffc384a314376a57efa9f1b7c34031\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 16 12:28:59.985336 containerd[1536]: time="2025-12-16T12:28:59.984109728Z" level=info msg="Container 4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:29:00.007944 containerd[1536]: time="2025-12-16T12:29:00.007796005Z" level=info msg="CreateContainer within sandbox \"2a29731e261ae5a905fb39df3c50bfdae7ffc384a314376a57efa9f1b7c34031\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916\""
Dec 16 12:29:00.009589 containerd[1536]: time="2025-12-16T12:29:00.009424045Z" level=info msg="StartContainer for \"4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916\""
Dec 16 12:29:00.010419 kubelet[2757]: E1216 12:29:00.009850 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac"
Dec 16 12:29:00.014374 containerd[1536]: time="2025-12-16T12:29:00.014164602Z" level=info msg="connecting to shim 4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916" address="unix:///run/containerd/s/9fb40ad4b18f34f38d12b89fee88d41e59ef769cd0fc3e70b4b4787bd6a5f3e3" protocol=ttrpc version=3
Dec 16 12:29:00.041604 systemd[1]: Started cri-containerd-4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916.scope - libcontainer container 4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916.
Dec 16 12:29:00.093850 containerd[1536]: time="2025-12-16T12:29:00.093465198Z" level=info msg="StartContainer for \"4e83256b9fdc011cb15078d179c1c6b1c11859e846b1019886b838614818e916\" returns successfully"
Dec 16 12:29:00.168004 kubelet[2757]: I1216 12:29:00.167563 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d6d559789-7x8zr" podStartSLOduration=1.212850808 podStartE2EDuration="3.167543906s" podCreationTimestamp="2025-12-16 12:28:57 +0000 UTC" firstStartedPulling="2025-12-16 12:28:57.997521581 +0000 UTC m=+28.097794318" lastFinishedPulling="2025-12-16 12:28:59.952214679 +0000 UTC m=+30.052487416" observedRunningTime="2025-12-16 12:29:00.16570202 +0000 UTC m=+30.265974757" watchObservedRunningTime="2025-12-16 12:29:00.167543906 +0000 UTC m=+30.267816643"
Dec 16 12:29:00.222071 kubelet[2757]: E1216 12:29:00.221514 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:29:00.222294 kubelet[2757]: W1216 12:29:00.222266 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:29:00.222711 kubelet[2757]: E1216 12:29:00.222540 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 16 12:29:00.223376 kubelet[2757]: E1216 12:29:00.223127 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.223376 kubelet[2757]: W1216 12:29:00.223161 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.223617 kubelet[2757]: E1216 12:29:00.223216 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.223794 kubelet[2757]: E1216 12:29:00.223759 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.223897 kubelet[2757]: W1216 12:29:00.223866 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.224033 kubelet[2757]: E1216 12:29:00.223966 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.224582 kubelet[2757]: E1216 12:29:00.224426 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.224582 kubelet[2757]: W1216 12:29:00.224442 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.224582 kubelet[2757]: E1216 12:29:00.224455 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.224852 kubelet[2757]: E1216 12:29:00.224829 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.224939 kubelet[2757]: W1216 12:29:00.224925 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.225108 kubelet[2757]: E1216 12:29:00.225015 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.225573 kubelet[2757]: E1216 12:29:00.225451 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.225573 kubelet[2757]: W1216 12:29:00.225466 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.225573 kubelet[2757]: E1216 12:29:00.225477 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.225905 kubelet[2757]: E1216 12:29:00.225805 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.226028 kubelet[2757]: W1216 12:29:00.226014 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.226389 kubelet[2757]: E1216 12:29:00.226087 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.226759 kubelet[2757]: E1216 12:29:00.226620 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.226759 kubelet[2757]: W1216 12:29:00.226633 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.226759 kubelet[2757]: E1216 12:29:00.226646 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.227471 kubelet[2757]: E1216 12:29:00.227322 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.227471 kubelet[2757]: W1216 12:29:00.227352 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.227471 kubelet[2757]: E1216 12:29:00.227391 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.227798 kubelet[2757]: E1216 12:29:00.227753 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.228048 kubelet[2757]: W1216 12:29:00.227879 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.228048 kubelet[2757]: E1216 12:29:00.227898 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.228309 kubelet[2757]: E1216 12:29:00.228274 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.228504 kubelet[2757]: W1216 12:29:00.228389 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.228504 kubelet[2757]: E1216 12:29:00.228405 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.229275 kubelet[2757]: E1216 12:29:00.229259 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.229493 kubelet[2757]: W1216 12:29:00.229383 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.229493 kubelet[2757]: E1216 12:29:00.229402 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.229760 kubelet[2757]: E1216 12:29:00.229658 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.229760 kubelet[2757]: W1216 12:29:00.229670 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.229760 kubelet[2757]: E1216 12:29:00.229680 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.230296 kubelet[2757]: E1216 12:29:00.230161 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.230296 kubelet[2757]: W1216 12:29:00.230182 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.230296 kubelet[2757]: E1216 12:29:00.230195 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.230911 kubelet[2757]: E1216 12:29:00.230636 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.231088 kubelet[2757]: W1216 12:29:00.231005 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.231088 kubelet[2757]: E1216 12:29:00.231030 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.252054 kubelet[2757]: E1216 12:29:00.251852 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.252054 kubelet[2757]: W1216 12:29:00.252004 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.252054 kubelet[2757]: E1216 12:29:00.252031 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.252532 kubelet[2757]: E1216 12:29:00.252512 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.252575 kubelet[2757]: W1216 12:29:00.252535 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.252575 kubelet[2757]: E1216 12:29:00.252556 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.252839 kubelet[2757]: E1216 12:29:00.252818 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.252938 kubelet[2757]: W1216 12:29:00.252840 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.252938 kubelet[2757]: E1216 12:29:00.252862 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.253299 kubelet[2757]: E1216 12:29:00.253279 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.253299 kubelet[2757]: W1216 12:29:00.253295 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.253809 kubelet[2757]: E1216 12:29:00.253683 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.253996 kubelet[2757]: E1216 12:29:00.253981 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.254157 kubelet[2757]: W1216 12:29:00.254140 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.254242 kubelet[2757]: E1216 12:29:00.254226 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.254685 kubelet[2757]: E1216 12:29:00.254567 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.254685 kubelet[2757]: W1216 12:29:00.254685 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.254797 kubelet[2757]: E1216 12:29:00.254705 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.256492 kubelet[2757]: E1216 12:29:00.256461 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.256492 kubelet[2757]: W1216 12:29:00.256483 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.256677 kubelet[2757]: E1216 12:29:00.256575 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.256795 kubelet[2757]: E1216 12:29:00.256676 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.256795 kubelet[2757]: W1216 12:29:00.256685 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.256795 kubelet[2757]: E1216 12:29:00.256721 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.256928 kubelet[2757]: E1216 12:29:00.256910 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.257014 kubelet[2757]: W1216 12:29:00.256929 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.257014 kubelet[2757]: E1216 12:29:00.256985 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.257329 kubelet[2757]: E1216 12:29:00.257307 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.257329 kubelet[2757]: W1216 12:29:00.257324 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.257580 kubelet[2757]: E1216 12:29:00.257347 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.257704 kubelet[2757]: E1216 12:29:00.257687 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.257767 kubelet[2757]: W1216 12:29:00.257755 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.257904 kubelet[2757]: E1216 12:29:00.257889 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.258587 kubelet[2757]: E1216 12:29:00.258560 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.258587 kubelet[2757]: W1216 12:29:00.258579 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.258761 kubelet[2757]: E1216 12:29:00.258600 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.259178 kubelet[2757]: E1216 12:29:00.259151 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.259178 kubelet[2757]: W1216 12:29:00.259173 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.260470 kubelet[2757]: E1216 12:29:00.259263 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.260470 kubelet[2757]: E1216 12:29:00.259330 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.260470 kubelet[2757]: W1216 12:29:00.259337 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.260470 kubelet[2757]: E1216 12:29:00.259495 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.260470 kubelet[2757]: W1216 12:29:00.259502 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.260470 kubelet[2757]: E1216 12:29:00.259512 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.260996 kubelet[2757]: E1216 12:29:00.260960 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.260996 kubelet[2757]: W1216 12:29:00.260982 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.261094 kubelet[2757]: E1216 12:29:00.260999 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.261094 kubelet[2757]: E1216 12:29:00.261031 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:00.261410 kubelet[2757]: E1216 12:29:00.261390 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.261410 kubelet[2757]: W1216 12:29:00.261404 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.261491 kubelet[2757]: E1216 12:29:00.261420 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:00.261633 kubelet[2757]: E1216 12:29:00.261617 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:00.261633 kubelet[2757]: W1216 12:29:00.261631 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:00.261689 kubelet[2757]: E1216 12:29:00.261644 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:01.150065 kubelet[2757]: I1216 12:29:01.147644 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:29:01.238438 kubelet[2757]: E1216 12:29:01.238386 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.238757 kubelet[2757]: W1216 12:29:01.238595 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.238757 kubelet[2757]: E1216 12:29:01.238635 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:01.239185 kubelet[2757]: E1216 12:29:01.239146 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.239371 kubelet[2757]: W1216 12:29:01.239170 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.239371 kubelet[2757]: E1216 12:29:01.239304 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:01.239917 kubelet[2757]: E1216 12:29:01.239855 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.239917 kubelet[2757]: W1216 12:29:01.239876 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.240144 kubelet[2757]: E1216 12:29:01.239896 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:01.240650 kubelet[2757]: E1216 12:29:01.240609 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.240650 kubelet[2757]: W1216 12:29:01.240623 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.240650 kubelet[2757]: E1216 12:29:01.240636 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:01.241077 kubelet[2757]: E1216 12:29:01.240977 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.241077 kubelet[2757]: W1216 12:29:01.240990 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.241077 kubelet[2757]: E1216 12:29:01.241007 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:01.241239 kubelet[2757]: E1216 12:29:01.241228 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.241393 kubelet[2757]: W1216 12:29:01.241279 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.241393 kubelet[2757]: E1216 12:29:01.241292 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:01.241615 kubelet[2757]: E1216 12:29:01.241558 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.241615 kubelet[2757]: W1216 12:29:01.241570 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.241615 kubelet[2757]: E1216 12:29:01.241581 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:01.241929 kubelet[2757]: E1216 12:29:01.241884 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.241929 kubelet[2757]: W1216 12:29:01.241897 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.241929 kubelet[2757]: E1216 12:29:01.241908 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:29:01.242260 kubelet[2757]: E1216 12:29:01.242215 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:29:01.242260 kubelet[2757]: W1216 12:29:01.242228 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:29:01.242260 kubelet[2757]: E1216 12:29:01.242238 2757 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:29:01.529413 containerd[1536]: time="2025-12-16T12:29:01.528724815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:01.532711 containerd[1536]: time="2025-12-16T12:29:01.531296996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 16 12:29:01.534389 containerd[1536]: time="2025-12-16T12:29:01.533062199Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:01.536349 containerd[1536]: time="2025-12-16T12:29:01.536278796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:01.537079 containerd[1536]: time="2025-12-16T12:29:01.537022494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.584370724s" Dec 16 12:29:01.537191 containerd[1536]: time="2025-12-16T12:29:01.537143697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:29:01.542276 containerd[1536]: time="2025-12-16T12:29:01.542229179Z" level=info msg="CreateContainer within sandbox \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:29:01.555396 containerd[1536]: time="2025-12-16T12:29:01.554608196Z" level=info msg="Container 45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:01.565755 containerd[1536]: time="2025-12-16T12:29:01.565661261Z" level=info msg="CreateContainer within sandbox \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b\"" Dec 16 12:29:01.566437 containerd[1536]: time="2025-12-16T12:29:01.566394319Z" level=info msg="StartContainer for \"45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b\"" Dec 16 12:29:01.569225 containerd[1536]: time="2025-12-16T12:29:01.569127664Z" level=info msg="connecting to shim 45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b" address="unix:///run/containerd/s/4e6c0baecbff5f392a169dc714ea52ea2d17724a0ea56e0e2b5e5d4a648090ef" protocol=ttrpc version=3 Dec 16 12:29:01.598633 systemd[1]: Started cri-containerd-45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b.scope - libcontainer container 45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b. Dec 16 12:29:01.691675 containerd[1536]: time="2025-12-16T12:29:01.691600804Z" level=info msg="StartContainer for \"45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b\" returns successfully" Dec 16 12:29:01.707730 systemd[1]: cri-containerd-45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b.scope: Deactivated successfully. 
Dec 16 12:29:01.716400 containerd[1536]: time="2025-12-16T12:29:01.716295197Z" level=info msg="received container exit event container_id:\"45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b\" id:\"45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b\" pid:3436 exited_at:{seconds:1765888141 nanos:715769304}" Dec 16 12:29:01.748344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45a2ba915f2ae83b7792dabe2156f4c3ee0f524e537721fb65e94d54abcce38b-rootfs.mount: Deactivated successfully. Dec 16 12:29:02.007389 kubelet[2757]: E1216 12:29:02.006939 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:02.160431 containerd[1536]: time="2025-12-16T12:29:02.159609694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:29:04.007893 kubelet[2757]: E1216 12:29:04.007802 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:04.802208 containerd[1536]: time="2025-12-16T12:29:04.802150705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:04.805600 containerd[1536]: time="2025-12-16T12:29:04.805530860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:29:04.806643 containerd[1536]: time="2025-12-16T12:29:04.806552922Z" level=info msg="ImageCreate event 
name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:04.809325 containerd[1536]: time="2025-12-16T12:29:04.809254942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:04.810567 containerd[1536]: time="2025-12-16T12:29:04.810069320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.649636967s" Dec 16 12:29:04.810567 containerd[1536]: time="2025-12-16T12:29:04.810110081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:29:04.814721 containerd[1536]: time="2025-12-16T12:29:04.814659502Z" level=info msg="CreateContainer within sandbox \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:29:04.827392 containerd[1536]: time="2025-12-16T12:29:04.825535302Z" level=info msg="Container de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:04.837211 containerd[1536]: time="2025-12-16T12:29:04.837145759Z" level=info msg="CreateContainer within sandbox \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61\"" Dec 16 12:29:04.838270 containerd[1536]: time="2025-12-16T12:29:04.838221583Z" 
level=info msg="StartContainer for \"de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61\"" Dec 16 12:29:04.840113 containerd[1536]: time="2025-12-16T12:29:04.840081704Z" level=info msg="connecting to shim de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61" address="unix:///run/containerd/s/4e6c0baecbff5f392a169dc714ea52ea2d17724a0ea56e0e2b5e5d4a648090ef" protocol=ttrpc version=3 Dec 16 12:29:04.867594 systemd[1]: Started cri-containerd-de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61.scope - libcontainer container de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61. Dec 16 12:29:04.956854 containerd[1536]: time="2025-12-16T12:29:04.956805405Z" level=info msg="StartContainer for \"de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61\" returns successfully" Dec 16 12:29:05.513817 containerd[1536]: time="2025-12-16T12:29:05.513772620Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:29:05.516440 systemd[1]: cri-containerd-de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61.scope: Deactivated successfully. Dec 16 12:29:05.516883 systemd[1]: cri-containerd-de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61.scope: Consumed 516ms CPU time, 188.6M memory peak, 165.9M written to disk. 
Dec 16 12:29:05.520015 containerd[1536]: time="2025-12-16T12:29:05.519960793Z" level=info msg="received container exit event container_id:\"de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61\" id:\"de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61\" pid:3497 exited_at:{seconds:1765888145 nanos:519669587}" Dec 16 12:29:05.545531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de24e64f702177efa36e27bee62bb81c39ccae7083817320056e68314119af61-rootfs.mount: Deactivated successfully. Dec 16 12:29:05.563632 kubelet[2757]: I1216 12:29:05.563585 2757 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:29:05.619574 systemd[1]: Created slice kubepods-besteffort-podb160a20e_ffb1_4e7e_91c1_0be63ff8efe3.slice - libcontainer container kubepods-besteffort-podb160a20e_ffb1_4e7e_91c1_0be63ff8efe3.slice. Dec 16 12:29:05.640034 systemd[1]: Created slice kubepods-burstable-podcff0ff36_6ff6_4100_b765_5149b21c7f04.slice - libcontainer container kubepods-burstable-podcff0ff36_6ff6_4100_b765_5149b21c7f04.slice. Dec 16 12:29:05.648469 systemd[1]: Created slice kubepods-besteffort-pod6eaa9000_d78a_4f2c_90ec_72eb46b8f615.slice - libcontainer container kubepods-besteffort-pod6eaa9000_d78a_4f2c_90ec_72eb46b8f615.slice. Dec 16 12:29:05.672906 systemd[1]: Created slice kubepods-besteffort-pod95ed81e2_a00e_4be1_9fa5_31d53c3e4933.slice - libcontainer container kubepods-besteffort-pod95ed81e2_a00e_4be1_9fa5_31d53c3e4933.slice. Dec 16 12:29:05.683635 systemd[1]: Created slice kubepods-burstable-podc774d147_6d0a_4215_84e8_95061b791c9c.slice - libcontainer container kubepods-burstable-podc774d147_6d0a_4215_84e8_95061b791c9c.slice. Dec 16 12:29:05.693228 systemd[1]: Created slice kubepods-besteffort-pod10459573_8b1e_499f_ad3a_d0c1037d6278.slice - libcontainer container kubepods-besteffort-pod10459573_8b1e_499f_ad3a_d0c1037d6278.slice. 
Dec 16 12:29:05.695743 kubelet[2757]: I1216 12:29:05.695582 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e9d246f-dbf0-4c36-8e21-415862de6ecd-config\") pod \"goldmane-666569f655-qk9mh\" (UID: \"9e9d246f-dbf0-4c36-8e21-415862de6ecd\") " pod="calico-system/goldmane-666569f655-qk9mh" Dec 16 12:29:05.695743 kubelet[2757]: I1216 12:29:05.695625 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c774d147-6d0a-4215-84e8-95061b791c9c-config-volume\") pod \"coredns-668d6bf9bc-hdcn2\" (UID: \"c774d147-6d0a-4215-84e8-95061b791c9c\") " pod="kube-system/coredns-668d6bf9bc-hdcn2" Dec 16 12:29:05.695743 kubelet[2757]: I1216 12:29:05.695644 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv48m\" (UniqueName: \"kubernetes.io/projected/95ed81e2-a00e-4be1-9fa5-31d53c3e4933-kube-api-access-mv48m\") pod \"calico-apiserver-654c6f7d5b-9x9hg\" (UID: \"95ed81e2-a00e-4be1-9fa5-31d53c3e4933\") " pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" Dec 16 12:29:05.695743 kubelet[2757]: I1216 12:29:05.695663 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdwd\" (UniqueName: \"kubernetes.io/projected/10459573-8b1e-499f-ad3a-d0c1037d6278-kube-api-access-xsdwd\") pod \"whisker-5f775f4fbb-txbqj\" (UID: \"10459573-8b1e-499f-ad3a-d0c1037d6278\") " pod="calico-system/whisker-5f775f4fbb-txbqj" Dec 16 12:29:05.695743 kubelet[2757]: I1216 12:29:05.695680 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b160a20e-ffb1-4e7e-91c1-0be63ff8efe3-calico-apiserver-certs\") pod \"calico-apiserver-654c6f7d5b-snhmr\" (UID: 
\"b160a20e-ffb1-4e7e-91c1-0be63ff8efe3\") " pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" Dec 16 12:29:05.696045 kubelet[2757]: I1216 12:29:05.695708 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbc5\" (UniqueName: \"kubernetes.io/projected/b160a20e-ffb1-4e7e-91c1-0be63ff8efe3-kube-api-access-bmbc5\") pod \"calico-apiserver-654c6f7d5b-snhmr\" (UID: \"b160a20e-ffb1-4e7e-91c1-0be63ff8efe3\") " pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" Dec 16 12:29:05.696045 kubelet[2757]: I1216 12:29:05.695731 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfw7l\" (UniqueName: \"kubernetes.io/projected/9e9d246f-dbf0-4c36-8e21-415862de6ecd-kube-api-access-dfw7l\") pod \"goldmane-666569f655-qk9mh\" (UID: \"9e9d246f-dbf0-4c36-8e21-415862de6ecd\") " pod="calico-system/goldmane-666569f655-qk9mh" Dec 16 12:29:05.696045 kubelet[2757]: I1216 12:29:05.695747 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rz8j\" (UniqueName: \"kubernetes.io/projected/c774d147-6d0a-4215-84e8-95061b791c9c-kube-api-access-7rz8j\") pod \"coredns-668d6bf9bc-hdcn2\" (UID: \"c774d147-6d0a-4215-84e8-95061b791c9c\") " pod="kube-system/coredns-668d6bf9bc-hdcn2" Dec 16 12:29:05.696045 kubelet[2757]: I1216 12:29:05.695771 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff0ff36-6ff6-4100-b765-5149b21c7f04-config-volume\") pod \"coredns-668d6bf9bc-t2s8d\" (UID: \"cff0ff36-6ff6-4100-b765-5149b21c7f04\") " pod="kube-system/coredns-668d6bf9bc-t2s8d" Dec 16 12:29:05.696045 kubelet[2757]: I1216 12:29:05.695788 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6eaa9000-d78a-4f2c-90ec-72eb46b8f615-tigera-ca-bundle\") pod \"calico-kube-controllers-7d8c786985-6rqkc\" (UID: \"6eaa9000-d78a-4f2c-90ec-72eb46b8f615\") " pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" Dec 16 12:29:05.696159 kubelet[2757]: I1216 12:29:05.695805 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-backend-key-pair\") pod \"whisker-5f775f4fbb-txbqj\" (UID: \"10459573-8b1e-499f-ad3a-d0c1037d6278\") " pod="calico-system/whisker-5f775f4fbb-txbqj" Dec 16 12:29:05.696159 kubelet[2757]: I1216 12:29:05.695825 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e9d246f-dbf0-4c36-8e21-415862de6ecd-goldmane-ca-bundle\") pod \"goldmane-666569f655-qk9mh\" (UID: \"9e9d246f-dbf0-4c36-8e21-415862de6ecd\") " pod="calico-system/goldmane-666569f655-qk9mh" Dec 16 12:29:05.696159 kubelet[2757]: I1216 12:29:05.695882 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjgx\" (UniqueName: \"kubernetes.io/projected/6eaa9000-d78a-4f2c-90ec-72eb46b8f615-kube-api-access-kvjgx\") pod \"calico-kube-controllers-7d8c786985-6rqkc\" (UID: \"6eaa9000-d78a-4f2c-90ec-72eb46b8f615\") " pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" Dec 16 12:29:05.696159 kubelet[2757]: I1216 12:29:05.695905 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-ca-bundle\") pod \"whisker-5f775f4fbb-txbqj\" (UID: \"10459573-8b1e-499f-ad3a-d0c1037d6278\") " pod="calico-system/whisker-5f775f4fbb-txbqj" Dec 16 12:29:05.696159 kubelet[2757]: I1216 12:29:05.695934 2757 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-847g4\" (UniqueName: \"kubernetes.io/projected/cff0ff36-6ff6-4100-b765-5149b21c7f04-kube-api-access-847g4\") pod \"coredns-668d6bf9bc-t2s8d\" (UID: \"cff0ff36-6ff6-4100-b765-5149b21c7f04\") " pod="kube-system/coredns-668d6bf9bc-t2s8d" Dec 16 12:29:05.696270 kubelet[2757]: I1216 12:29:05.695959 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9e9d246f-dbf0-4c36-8e21-415862de6ecd-goldmane-key-pair\") pod \"goldmane-666569f655-qk9mh\" (UID: \"9e9d246f-dbf0-4c36-8e21-415862de6ecd\") " pod="calico-system/goldmane-666569f655-qk9mh" Dec 16 12:29:05.696270 kubelet[2757]: I1216 12:29:05.695976 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95ed81e2-a00e-4be1-9fa5-31d53c3e4933-calico-apiserver-certs\") pod \"calico-apiserver-654c6f7d5b-9x9hg\" (UID: \"95ed81e2-a00e-4be1-9fa5-31d53c3e4933\") " pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" Dec 16 12:29:05.707035 systemd[1]: Created slice kubepods-besteffort-pod9e9d246f_dbf0_4c36_8e21_415862de6ecd.slice - libcontainer container kubepods-besteffort-pod9e9d246f_dbf0_4c36_8e21_415862de6ecd.slice. 
Dec 16 12:29:05.929542 containerd[1536]: time="2025-12-16T12:29:05.929484729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-snhmr,Uid:b160a20e-ffb1-4e7e-91c1-0be63ff8efe3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:29:05.952063 containerd[1536]: time="2025-12-16T12:29:05.951879011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2s8d,Uid:cff0ff36-6ff6-4100-b765-5149b21c7f04,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:05.966394 containerd[1536]: time="2025-12-16T12:29:05.966260880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c786985-6rqkc,Uid:6eaa9000-d78a-4f2c-90ec-72eb46b8f615,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:05.981729 containerd[1536]: time="2025-12-16T12:29:05.981371085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-9x9hg,Uid:95ed81e2-a00e-4be1-9fa5-31d53c3e4933,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:29:05.990795 containerd[1536]: time="2025-12-16T12:29:05.990743647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hdcn2,Uid:c774d147-6d0a-4215-84e8-95061b791c9c,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:06.005695 containerd[1536]: time="2025-12-16T12:29:06.005650965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f775f4fbb-txbqj,Uid:10459573-8b1e-499f-ad3a-d0c1037d6278,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:06.013381 containerd[1536]: time="2025-12-16T12:29:06.012961758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qk9mh,Uid:9e9d246f-dbf0-4c36-8e21-415862de6ecd,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:06.015736 systemd[1]: Created slice kubepods-besteffort-podde20fd2d_5c54_466a_8865_47a00aec7cac.slice - libcontainer container kubepods-besteffort-podde20fd2d_5c54_466a_8865_47a00aec7cac.slice. 
Dec 16 12:29:06.021987 containerd[1536]: time="2025-12-16T12:29:06.021677221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5llv,Uid:de20fd2d-5c54-466a-8865-47a00aec7cac,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:06.130306 containerd[1536]: time="2025-12-16T12:29:06.130227535Z" level=error msg="Failed to destroy network for sandbox \"539aa7f3e3287e9845fa3fc5f907f4dbeb6db514c76d7a269ab27395afd885fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.137813 containerd[1536]: time="2025-12-16T12:29:06.137639971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-snhmr,Uid:b160a20e-ffb1-4e7e-91c1-0be63ff8efe3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"539aa7f3e3287e9845fa3fc5f907f4dbeb6db514c76d7a269ab27395afd885fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.138630 kubelet[2757]: E1216 12:29:06.137937 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"539aa7f3e3287e9845fa3fc5f907f4dbeb6db514c76d7a269ab27395afd885fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.138630 kubelet[2757]: E1216 12:29:06.138011 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"539aa7f3e3287e9845fa3fc5f907f4dbeb6db514c76d7a269ab27395afd885fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" Dec 16 12:29:06.138630 kubelet[2757]: E1216 12:29:06.138031 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"539aa7f3e3287e9845fa3fc5f907f4dbeb6db514c76d7a269ab27395afd885fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" Dec 16 12:29:06.138727 kubelet[2757]: E1216 12:29:06.138082 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"539aa7f3e3287e9845fa3fc5f907f4dbeb6db514c76d7a269ab27395afd885fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:29:06.188919 containerd[1536]: time="2025-12-16T12:29:06.188638039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:29:06.200944 containerd[1536]: time="2025-12-16T12:29:06.200826655Z" level=error msg="Failed to destroy network for sandbox \"3a6a0b42517d3f8f8607027d3f55b27365e22ac83deb4df2c4cf59d5a89dc5f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.204145 containerd[1536]: time="2025-12-16T12:29:06.204082643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5llv,Uid:de20fd2d-5c54-466a-8865-47a00aec7cac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a0b42517d3f8f8607027d3f55b27365e22ac83deb4df2c4cf59d5a89dc5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.205146 containerd[1536]: time="2025-12-16T12:29:06.204706776Z" level=error msg="Failed to destroy network for sandbox \"8d979c7558700b072dcf086fcdec442c005b6b68b0e0a22665c3e69c687f6e38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.205243 kubelet[2757]: E1216 12:29:06.204710 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a0b42517d3f8f8607027d3f55b27365e22ac83deb4df2c4cf59d5a89dc5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.205243 kubelet[2757]: E1216 12:29:06.204779 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a0b42517d3f8f8607027d3f55b27365e22ac83deb4df2c4cf59d5a89dc5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5llv" Dec 16 12:29:06.205243 kubelet[2757]: E1216 
12:29:06.204799 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6a0b42517d3f8f8607027d3f55b27365e22ac83deb4df2c4cf59d5a89dc5f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c5llv" Dec 16 12:29:06.205339 kubelet[2757]: E1216 12:29:06.204867 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a6a0b42517d3f8f8607027d3f55b27365e22ac83deb4df2c4cf59d5a89dc5f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:06.211100 containerd[1536]: time="2025-12-16T12:29:06.211037509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2s8d,Uid:cff0ff36-6ff6-4100-b765-5149b21c7f04,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d979c7558700b072dcf086fcdec442c005b6b68b0e0a22665c3e69c687f6e38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.211401 kubelet[2757]: E1216 12:29:06.211298 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"8d979c7558700b072dcf086fcdec442c005b6b68b0e0a22665c3e69c687f6e38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.211401 kubelet[2757]: E1216 12:29:06.211351 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d979c7558700b072dcf086fcdec442c005b6b68b0e0a22665c3e69c687f6e38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t2s8d" Dec 16 12:29:06.211401 kubelet[2757]: E1216 12:29:06.211388 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d979c7558700b072dcf086fcdec442c005b6b68b0e0a22665c3e69c687f6e38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t2s8d" Dec 16 12:29:06.211617 kubelet[2757]: E1216 12:29:06.211435 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t2s8d_kube-system(cff0ff36-6ff6-4100-b765-5149b21c7f04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t2s8d_kube-system(cff0ff36-6ff6-4100-b765-5149b21c7f04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d979c7558700b072dcf086fcdec442c005b6b68b0e0a22665c3e69c687f6e38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t2s8d" 
podUID="cff0ff36-6ff6-4100-b765-5149b21c7f04" Dec 16 12:29:06.222993 containerd[1536]: time="2025-12-16T12:29:06.222872837Z" level=error msg="Failed to destroy network for sandbox \"fb9dd85c8be31d90e286785f4366aead01d8b12a1b0b8303f937ca1dca171908\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.226027 containerd[1536]: time="2025-12-16T12:29:06.225909860Z" level=error msg="Failed to destroy network for sandbox \"021727806a8e59a71f887dcda8f9ebea98cfa2978a61f52af5d403ac1e9199ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.229423 containerd[1536]: time="2025-12-16T12:29:06.228930844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c786985-6rqkc,Uid:6eaa9000-d78a-4f2c-90ec-72eb46b8f615,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9dd85c8be31d90e286785f4366aead01d8b12a1b0b8303f937ca1dca171908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.231643 kubelet[2757]: E1216 12:29:06.231584 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9dd85c8be31d90e286785f4366aead01d8b12a1b0b8303f937ca1dca171908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.231995 kubelet[2757]: E1216 12:29:06.231649 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"fb9dd85c8be31d90e286785f4366aead01d8b12a1b0b8303f937ca1dca171908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" Dec 16 12:29:06.231995 kubelet[2757]: E1216 12:29:06.231672 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9dd85c8be31d90e286785f4366aead01d8b12a1b0b8303f937ca1dca171908\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" Dec 16 12:29:06.231995 kubelet[2757]: E1216 12:29:06.231710 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb9dd85c8be31d90e286785f4366aead01d8b12a1b0b8303f937ca1dca171908\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:29:06.233068 containerd[1536]: time="2025-12-16T12:29:06.232975688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qk9mh,Uid:9e9d246f-dbf0-4c36-8e21-415862de6ecd,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"021727806a8e59a71f887dcda8f9ebea98cfa2978a61f52af5d403ac1e9199ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.234027 kubelet[2757]: E1216 12:29:06.233416 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"021727806a8e59a71f887dcda8f9ebea98cfa2978a61f52af5d403ac1e9199ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.234027 kubelet[2757]: E1216 12:29:06.233467 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"021727806a8e59a71f887dcda8f9ebea98cfa2978a61f52af5d403ac1e9199ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qk9mh" Dec 16 12:29:06.234027 kubelet[2757]: E1216 12:29:06.233485 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"021727806a8e59a71f887dcda8f9ebea98cfa2978a61f52af5d403ac1e9199ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qk9mh" Dec 16 12:29:06.234160 kubelet[2757]: E1216 12:29:06.233535 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"021727806a8e59a71f887dcda8f9ebea98cfa2978a61f52af5d403ac1e9199ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:29:06.260298 containerd[1536]: time="2025-12-16T12:29:06.260214259Z" level=error msg="Failed to destroy network for sandbox \"feea247f753e6f3f2373a6dc61bdebdab61ca899704fe1f21c49f6c38354af23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.262878 containerd[1536]: time="2025-12-16T12:29:06.262232622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-9x9hg,Uid:95ed81e2-a00e-4be1-9fa5-31d53c3e4933,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea247f753e6f3f2373a6dc61bdebdab61ca899704fe1f21c49f6c38354af23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.263514 kubelet[2757]: E1216 12:29:06.263470 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea247f753e6f3f2373a6dc61bdebdab61ca899704fe1f21c49f6c38354af23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
12:29:06.263619 kubelet[2757]: E1216 12:29:06.263538 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea247f753e6f3f2373a6dc61bdebdab61ca899704fe1f21c49f6c38354af23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" Dec 16 12:29:06.263619 kubelet[2757]: E1216 12:29:06.263560 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feea247f753e6f3f2373a6dc61bdebdab61ca899704fe1f21c49f6c38354af23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" Dec 16 12:29:06.263793 kubelet[2757]: E1216 12:29:06.263614 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feea247f753e6f3f2373a6dc61bdebdab61ca899704fe1f21c49f6c38354af23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:29:06.282448 containerd[1536]: time="2025-12-16T12:29:06.282400804Z" level=error msg="Failed to destroy network for sandbox 
\"a0f03b65865aaae637d6db7cc6433725d0b5e251b580b8777489be09824f40b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.284433 containerd[1536]: time="2025-12-16T12:29:06.284388446Z" level=error msg="Failed to destroy network for sandbox \"32d9025b63c3a0b52b1911ce49f8cf35346cb8a157d99d3bc5609406a124526b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.285183 containerd[1536]: time="2025-12-16T12:29:06.285144902Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f775f4fbb-txbqj,Uid:10459573-8b1e-499f-ad3a-d0c1037d6278,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0f03b65865aaae637d6db7cc6433725d0b5e251b580b8777489be09824f40b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.285434 kubelet[2757]: E1216 12:29:06.285397 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0f03b65865aaae637d6db7cc6433725d0b5e251b580b8777489be09824f40b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.285502 kubelet[2757]: E1216 12:29:06.285456 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0f03b65865aaae637d6db7cc6433725d0b5e251b580b8777489be09824f40b1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f775f4fbb-txbqj" Dec 16 12:29:06.285502 kubelet[2757]: E1216 12:29:06.285477 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0f03b65865aaae637d6db7cc6433725d0b5e251b580b8777489be09824f40b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f775f4fbb-txbqj" Dec 16 12:29:06.285567 kubelet[2757]: E1216 12:29:06.285523 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f775f4fbb-txbqj_calico-system(10459573-8b1e-499f-ad3a-d0c1037d6278)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f775f4fbb-txbqj_calico-system(10459573-8b1e-499f-ad3a-d0c1037d6278)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0f03b65865aaae637d6db7cc6433725d0b5e251b580b8777489be09824f40b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f775f4fbb-txbqj" podUID="10459573-8b1e-499f-ad3a-d0c1037d6278" Dec 16 12:29:06.286477 containerd[1536]: time="2025-12-16T12:29:06.286437729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hdcn2,Uid:c774d147-6d0a-4215-84e8-95061b791c9c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d9025b63c3a0b52b1911ce49f8cf35346cb8a157d99d3bc5609406a124526b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.286804 kubelet[2757]: E1216 12:29:06.286763 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d9025b63c3a0b52b1911ce49f8cf35346cb8a157d99d3bc5609406a124526b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:29:06.286886 kubelet[2757]: E1216 12:29:06.286823 2757 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d9025b63c3a0b52b1911ce49f8cf35346cb8a157d99d3bc5609406a124526b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hdcn2" Dec 16 12:29:06.286886 kubelet[2757]: E1216 12:29:06.286859 2757 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d9025b63c3a0b52b1911ce49f8cf35346cb8a157d99d3bc5609406a124526b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hdcn2" Dec 16 12:29:06.286945 kubelet[2757]: E1216 12:29:06.286893 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hdcn2_kube-system(c774d147-6d0a-4215-84e8-95061b791c9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hdcn2_kube-system(c774d147-6d0a-4215-84e8-95061b791c9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"32d9025b63c3a0b52b1911ce49f8cf35346cb8a157d99d3bc5609406a124526b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hdcn2" podUID="c774d147-6d0a-4215-84e8-95061b791c9c" Dec 16 12:29:09.952233 kubelet[2757]: I1216 12:29:09.951188 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:29:10.882932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114774020.mount: Deactivated successfully. Dec 16 12:29:10.910704 containerd[1536]: time="2025-12-16T12:29:10.909860876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:10.912265 containerd[1536]: time="2025-12-16T12:29:10.912210081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:29:10.912587 containerd[1536]: time="2025-12-16T12:29:10.912544207Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:10.914835 containerd[1536]: time="2025-12-16T12:29:10.914797890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:10.915809 containerd[1536]: time="2025-12-16T12:29:10.915375100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.72619321s" 
Dec 16 12:29:10.915809 containerd[1536]: time="2025-12-16T12:29:10.915414941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:29:10.936552 containerd[1536]: time="2025-12-16T12:29:10.936501098Z" level=info msg="CreateContainer within sandbox \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:29:10.954847 containerd[1536]: time="2025-12-16T12:29:10.954766643Z" level=info msg="Container 8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:10.961455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount403601856.mount: Deactivated successfully. Dec 16 12:29:10.970390 containerd[1536]: time="2025-12-16T12:29:10.970310495Z" level=info msg="CreateContainer within sandbox \"c26e64f62cc1876da140fc074e0b474df3bcd5479d679339e4d39b4e656e1d29\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032\"" Dec 16 12:29:10.975543 containerd[1536]: time="2025-12-16T12:29:10.975436432Z" level=info msg="StartContainer for \"8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032\"" Dec 16 12:29:10.979435 containerd[1536]: time="2025-12-16T12:29:10.979392747Z" level=info msg="connecting to shim 8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032" address="unix:///run/containerd/s/4e6c0baecbff5f392a169dc714ea52ea2d17724a0ea56e0e2b5e5d4a648090ef" protocol=ttrpc version=3 Dec 16 12:29:11.011661 systemd[1]: Started cri-containerd-8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032.scope - libcontainer container 8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032. 
Dec 16 12:29:11.093613 containerd[1536]: time="2025-12-16T12:29:11.093511411Z" level=info msg="StartContainer for \"8b77b194b4b5ae080ed0cf46e3620d76c4704aa6bd42ad405da5449204a12032\" returns successfully" Dec 16 12:29:11.235467 kubelet[2757]: I1216 12:29:11.233825 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sj62k" podStartSLOduration=1.472464204 podStartE2EDuration="14.233808266s" podCreationTimestamp="2025-12-16 12:28:57 +0000 UTC" firstStartedPulling="2025-12-16 12:28:58.155780311 +0000 UTC m=+28.256053048" lastFinishedPulling="2025-12-16 12:29:10.917124413 +0000 UTC m=+41.017397110" observedRunningTime="2025-12-16 12:29:11.233017852 +0000 UTC m=+41.333290589" watchObservedRunningTime="2025-12-16 12:29:11.233808266 +0000 UTC m=+41.334081003" Dec 16 12:29:11.286547 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:29:11.286667 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:29:11.543490 kubelet[2757]: I1216 12:29:11.542606 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-backend-key-pair\") pod \"10459573-8b1e-499f-ad3a-d0c1037d6278\" (UID: \"10459573-8b1e-499f-ad3a-d0c1037d6278\") "
Dec 16 12:29:11.543490 kubelet[2757]: I1216 12:29:11.542698 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-ca-bundle\") pod \"10459573-8b1e-499f-ad3a-d0c1037d6278\" (UID: \"10459573-8b1e-499f-ad3a-d0c1037d6278\") "
Dec 16 12:29:11.543490 kubelet[2757]: I1216 12:29:11.542781 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsdwd\" (UniqueName: \"kubernetes.io/projected/10459573-8b1e-499f-ad3a-d0c1037d6278-kube-api-access-xsdwd\") pod \"10459573-8b1e-499f-ad3a-d0c1037d6278\" (UID: \"10459573-8b1e-499f-ad3a-d0c1037d6278\") "
Dec 16 12:29:11.551732 kubelet[2757]: I1216 12:29:11.551678 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "10459573-8b1e-499f-ad3a-d0c1037d6278" (UID: "10459573-8b1e-499f-ad3a-d0c1037d6278"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 16 12:29:11.552047 kubelet[2757]: I1216 12:29:11.552020 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10459573-8b1e-499f-ad3a-d0c1037d6278-kube-api-access-xsdwd" (OuterVolumeSpecName: "kube-api-access-xsdwd") pod "10459573-8b1e-499f-ad3a-d0c1037d6278" (UID: "10459573-8b1e-499f-ad3a-d0c1037d6278"). InnerVolumeSpecName "kube-api-access-xsdwd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 16 12:29:11.552233 kubelet[2757]: I1216 12:29:11.552203 2757 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "10459573-8b1e-499f-ad3a-d0c1037d6278" (UID: "10459573-8b1e-499f-ad3a-d0c1037d6278"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 16 12:29:11.643307 kubelet[2757]: I1216 12:29:11.643253 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-backend-key-pair\") on node \"ci-4459-2-2-0-7f64ef3ba0\" DevicePath \"\""
Dec 16 12:29:11.643307 kubelet[2757]: I1216 12:29:11.643291 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10459573-8b1e-499f-ad3a-d0c1037d6278-whisker-ca-bundle\") on node \"ci-4459-2-2-0-7f64ef3ba0\" DevicePath \"\""
Dec 16 12:29:11.643307 kubelet[2757]: I1216 12:29:11.643308 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xsdwd\" (UniqueName: \"kubernetes.io/projected/10459573-8b1e-499f-ad3a-d0c1037d6278-kube-api-access-xsdwd\") on node \"ci-4459-2-2-0-7f64ef3ba0\" DevicePath \"\""
Dec 16 12:29:11.883699 systemd[1]: var-lib-kubelet-pods-10459573\x2d8b1e\x2d499f\x2dad3a\x2dd0c1037d6278-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxsdwd.mount: Deactivated successfully.
Dec 16 12:29:11.883856 systemd[1]: var-lib-kubelet-pods-10459573\x2d8b1e\x2d499f\x2dad3a\x2dd0c1037d6278-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Dec 16 12:29:12.019872 systemd[1]: Removed slice kubepods-besteffort-pod10459573_8b1e_499f_ad3a_d0c1037d6278.slice - libcontainer container kubepods-besteffort-pod10459573_8b1e_499f_ad3a_d0c1037d6278.slice.
Dec 16 12:29:12.305050 systemd[1]: Created slice kubepods-besteffort-pod0af3bb6b_9dd4_40a9_8f44_2121edd9c07c.slice - libcontainer container kubepods-besteffort-pod0af3bb6b_9dd4_40a9_8f44_2121edd9c07c.slice.
Dec 16 12:29:12.348378 kubelet[2757]: I1216 12:29:12.348171 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0af3bb6b-9dd4-40a9-8f44-2121edd9c07c-whisker-ca-bundle\") pod \"whisker-6cc8765bd4-8622q\" (UID: \"0af3bb6b-9dd4-40a9-8f44-2121edd9c07c\") " pod="calico-system/whisker-6cc8765bd4-8622q"
Dec 16 12:29:12.348378 kubelet[2757]: I1216 12:29:12.348226 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85b54\" (UniqueName: \"kubernetes.io/projected/0af3bb6b-9dd4-40a9-8f44-2121edd9c07c-kube-api-access-85b54\") pod \"whisker-6cc8765bd4-8622q\" (UID: \"0af3bb6b-9dd4-40a9-8f44-2121edd9c07c\") " pod="calico-system/whisker-6cc8765bd4-8622q"
Dec 16 12:29:12.348378 kubelet[2757]: I1216 12:29:12.348262 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0af3bb6b-9dd4-40a9-8f44-2121edd9c07c-whisker-backend-key-pair\") pod \"whisker-6cc8765bd4-8622q\" (UID: \"0af3bb6b-9dd4-40a9-8f44-2121edd9c07c\") " pod="calico-system/whisker-6cc8765bd4-8622q"
Dec 16 12:29:12.612953 containerd[1536]: time="2025-12-16T12:29:12.612752408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc8765bd4-8622q,Uid:0af3bb6b-9dd4-40a9-8f44-2121edd9c07c,Namespace:calico-system,Attempt:0,}"
Dec 16 12:29:12.867195 systemd-networkd[1418]: cali930975d4954: Link UP
Dec 16 12:29:12.868339 systemd-networkd[1418]: cali930975d4954: Gained carrier
Dec 16 12:29:12.890108 containerd[1536]: 2025-12-16 12:29:12.643 [INFO][3881] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Dec 16 12:29:12.890108 containerd[1536]: 2025-12-16 12:29:12.695 [INFO][3881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0 whisker-6cc8765bd4- calico-system 0af3bb6b-9dd4-40a9-8f44-2121edd9c07c 905 0 2025-12-16 12:29:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6cc8765bd4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 whisker-6cc8765bd4-8622q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali930975d4954 [] [] }} ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-"
Dec 16 12:29:12.890108 containerd[1536]: 2025-12-16 12:29:12.695 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.890108 containerd[1536]: 2025-12-16 12:29:12.788 [INFO][3914] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" HandleID="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.788 [INFO][3914] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" HandleID="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038b910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"whisker-6cc8765bd4-8622q", "timestamp":"2025-12-16 12:29:12.788373509 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.789 [INFO][3914] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.789 [INFO][3914] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.789 [INFO][3914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0'
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.804 [INFO][3914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.812 [INFO][3914] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.825 [INFO][3914] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.829 [INFO][3914] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.890642 containerd[1536]: 2025-12-16 12:29:12.835 [INFO][3914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.835 [INFO][3914] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.838 [INFO][3914] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.846 [INFO][3914] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.853 [INFO][3914] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.1/26] block=192.168.111.0/26 handle="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.853 [INFO][3914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.1/26] handle="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.853 [INFO][3914] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 12:29:12.891865 containerd[1536]: 2025-12-16 12:29:12.853 [INFO][3914] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.1/26] IPv6=[] ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" HandleID="k8s-pod-network.517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.892301 containerd[1536]: 2025-12-16 12:29:12.857 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0", GenerateName:"whisker-6cc8765bd4-", Namespace:"calico-system", SelfLink:"", UID:"0af3bb6b-9dd4-40a9-8f44-2121edd9c07c", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc8765bd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"whisker-6cc8765bd4-8622q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali930975d4954", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:29:12.892301 containerd[1536]: 2025-12-16 12:29:12.857 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.1/32] ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.892767 containerd[1536]: 2025-12-16 12:29:12.857 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali930975d4954 ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.892767 containerd[1536]: 2025-12-16 12:29:12.867 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.892815 containerd[1536]: 2025-12-16 12:29:12.869 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0", GenerateName:"whisker-6cc8765bd4-", Namespace:"calico-system", SelfLink:"", UID:"0af3bb6b-9dd4-40a9-8f44-2121edd9c07c", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 12, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6cc8765bd4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b", Pod:"whisker-6cc8765bd4-8622q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali930975d4954", MAC:"a6:3a:0d:94:0b:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:29:12.892866 containerd[1536]: 2025-12-16 12:29:12.885 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" Namespace="calico-system" Pod="whisker-6cc8765bd4-8622q" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-whisker--6cc8765bd4--8622q-eth0"
Dec 16 12:29:12.927248 containerd[1536]: time="2025-12-16T12:29:12.927158710Z" level=info msg="connecting to shim 517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b" address="unix:///run/containerd/s/90296ce306f58e38d138ad30b040efc5176ef94f8e7345d3849da8e6aa79374c" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:29:12.989872 systemd[1]: Started cri-containerd-517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b.scope - libcontainer container 517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b.
Dec 16 12:29:13.083630 containerd[1536]: time="2025-12-16T12:29:13.083540869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cc8765bd4-8622q,Uid:0af3bb6b-9dd4-40a9-8f44-2121edd9c07c,Namespace:calico-system,Attempt:0,} returns sandbox id \"517bbfd064e4614e6f2c636bcae30c08e3190281dd6480d9c949660756f7131b\""
Dec 16 12:29:13.093442 containerd[1536]: time="2025-12-16T12:29:13.093336439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:29:13.449639 containerd[1536]: time="2025-12-16T12:29:13.449586727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:29:13.451675 containerd[1536]: time="2025-12-16T12:29:13.451606202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:29:13.451802 containerd[1536]: time="2025-12-16T12:29:13.451722644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:29:13.455994 kubelet[2757]: E1216 12:29:13.455918 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:29:13.455994 kubelet[2757]: E1216 12:29:13.456003 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:29:13.465721 kubelet[2757]: E1216 12:29:13.465323 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58879817308484fbfd5b74fefa0ca63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:29:13.469090 containerd[1536]: time="2025-12-16T12:29:13.468249892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:29:13.516484 systemd-networkd[1418]: vxlan.calico: Link UP
Dec 16 12:29:13.516495 systemd-networkd[1418]: vxlan.calico: Gained carrier
Dec 16 12:29:13.979516 containerd[1536]: time="2025-12-16T12:29:13.979349678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:29:13.980872 containerd[1536]: time="2025-12-16T12:29:13.980807463Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:29:13.981002 containerd[1536]: time="2025-12-16T12:29:13.980965906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:29:13.981314 kubelet[2757]: E1216 12:29:13.981242 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:29:13.981419 kubelet[2757]: E1216 12:29:13.981330 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:29:13.982044 kubelet[2757]: E1216 12:29:13.981797 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:29:13.983387 kubelet[2757]: E1216 12:29:13.983323 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c"
Dec 16 12:29:14.016982 kubelet[2757]: I1216 12:29:14.016740 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10459573-8b1e-499f-ad3a-d0c1037d6278" path="/var/lib/kubelet/pods/10459573-8b1e-499f-ad3a-d0c1037d6278/volumes"
Dec 16 12:29:14.218735 kubelet[2757]: E1216 12:29:14.218654 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c"
Dec 16 12:29:14.603630 systemd-networkd[1418]: cali930975d4954: Gained IPv6LL
Dec 16 12:29:14.859842 systemd-networkd[1418]: vxlan.calico: Gained IPv6LL
Dec 16 12:29:17.008281 containerd[1536]: time="2025-12-16T12:29:17.008235838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hdcn2,Uid:c774d147-6d0a-4215-84e8-95061b791c9c,Namespace:kube-system,Attempt:0,}"
Dec 16 12:29:17.154519 systemd-networkd[1418]: cali3e910f2cc71: Link UP
Dec 16 12:29:17.154700 systemd-networkd[1418]: cali3e910f2cc71: Gained carrier
Dec 16 12:29:17.179065 containerd[1536]: 2025-12-16 12:29:17.058 [INFO][4147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0 coredns-668d6bf9bc- kube-system c774d147-6d0a-4215-84e8-95061b791c9c 833 0 2025-12-16 12:28:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 coredns-668d6bf9bc-hdcn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e910f2cc71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-"
Dec 16 12:29:17.179065 containerd[1536]: 2025-12-16 12:29:17.058 [INFO][4147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0"
Dec 16 12:29:17.179065 containerd[1536]: 2025-12-16 12:29:17.092 [INFO][4159] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" HandleID="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0"
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.092 [INFO][4159] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" HandleID="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"coredns-668d6bf9bc-hdcn2", "timestamp":"2025-12-16 12:29:17.092671766 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.092 [INFO][4159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.092 [INFO][4159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.093 [INFO][4159] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0'
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.104 [INFO][4159] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.112 [INFO][4159] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.120 [INFO][4159] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.123 [INFO][4159] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179261 containerd[1536]: 2025-12-16 12:29:17.127 [INFO][4159] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.127 [INFO][4159] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.130 [INFO][4159] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.136 [INFO][4159] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.145 [INFO][4159] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.2/26] block=192.168.111.0/26 handle="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.145 [INFO][4159] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.2/26] handle="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" host="ci-4459-2-2-0-7f64ef3ba0"
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.146 [INFO][4159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 12:29:17.179724 containerd[1536]: 2025-12-16 12:29:17.146 [INFO][4159] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.2/26] IPv6=[] ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" HandleID="k8s-pod-network.acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" Dec 16 12:29:17.179870 containerd[1536]: 2025-12-16 12:29:17.150 [INFO][4147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c774d147-6d0a-4215-84e8-95061b791c9c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"coredns-668d6bf9bc-hdcn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali3e910f2cc71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:17.179870 containerd[1536]: 2025-12-16 12:29:17.151 [INFO][4147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.2/32] ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" Dec 16 12:29:17.179870 containerd[1536]: 2025-12-16 12:29:17.151 [INFO][4147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e910f2cc71 ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" Dec 16 12:29:17.179870 containerd[1536]: 2025-12-16 12:29:17.154 [INFO][4147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" Dec 16 12:29:17.179870 containerd[1536]: 2025-12-16 12:29:17.156 [INFO][4147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" 
WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c774d147-6d0a-4215-84e8-95061b791c9c", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea", Pod:"coredns-668d6bf9bc-hdcn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e910f2cc71", MAC:"ce:a6:90:63:b9:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:17.179870 
containerd[1536]: 2025-12-16 12:29:17.173 [INFO][4147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" Namespace="kube-system" Pod="coredns-668d6bf9bc-hdcn2" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--hdcn2-eth0" Dec 16 12:29:17.221423 containerd[1536]: time="2025-12-16T12:29:17.221191628Z" level=info msg="connecting to shim acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea" address="unix:///run/containerd/s/0f367a5204e29bfd08cec556aeb1ca0c1fba283abaf94175df67084f5a6a63c3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:17.255608 systemd[1]: Started cri-containerd-acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea.scope - libcontainer container acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea. Dec 16 12:29:17.300351 containerd[1536]: time="2025-12-16T12:29:17.300308633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hdcn2,Uid:c774d147-6d0a-4215-84e8-95061b791c9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea\"" Dec 16 12:29:17.305244 containerd[1536]: time="2025-12-16T12:29:17.305099588Z" level=info msg="CreateContainer within sandbox \"acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:29:17.319124 containerd[1536]: time="2025-12-16T12:29:17.319080528Z" level=info msg="Container 9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:17.327395 containerd[1536]: time="2025-12-16T12:29:17.327186616Z" level=info msg="CreateContainer within sandbox \"acb700ea271ab086daeaedf38fa1c03cd460e31a86c8dda4c740d4428786ecea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88\"" Dec 16 12:29:17.329064 containerd[1536]: time="2025-12-16T12:29:17.328696999Z" level=info msg="StartContainer for \"9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88\"" Dec 16 12:29:17.330933 containerd[1536]: time="2025-12-16T12:29:17.330885954Z" level=info msg="connecting to shim 9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88" address="unix:///run/containerd/s/0f367a5204e29bfd08cec556aeb1ca0c1fba283abaf94175df67084f5a6a63c3" protocol=ttrpc version=3 Dec 16 12:29:17.354705 systemd[1]: Started cri-containerd-9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88.scope - libcontainer container 9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88. Dec 16 12:29:17.393403 containerd[1536]: time="2025-12-16T12:29:17.392115597Z" level=info msg="StartContainer for \"9f8b6365d42173b881175db1ef7a5c1b944f5baf38fa171ace5c992e2d2e1f88\" returns successfully" Dec 16 12:29:18.009010 containerd[1536]: time="2025-12-16T12:29:18.008426450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2s8d,Uid:cff0ff36-6ff6-4100-b765-5149b21c7f04,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:18.009715 containerd[1536]: time="2025-12-16T12:29:18.009672149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c786985-6rqkc,Uid:6eaa9000-d78a-4f2c-90ec-72eb46b8f615,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:18.215247 systemd-networkd[1418]: caliabe7e8ff0f9: Link UP Dec 16 12:29:18.217410 systemd-networkd[1418]: caliabe7e8ff0f9: Gained carrier Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.083 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0 coredns-668d6bf9bc- kube-system cff0ff36-6ff6-4100-b765-5149b21c7f04 837 0 2025-12-16 12:28:37 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 coredns-668d6bf9bc-t2s8d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliabe7e8ff0f9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.083 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.128 [INFO][4281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" HandleID="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.128 [INFO][4281] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" HandleID="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b8b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"coredns-668d6bf9bc-t2s8d", "timestamp":"2025-12-16 12:29:18.128508573 +0000 UTC"}, 
Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.128 [INFO][4281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.129 [INFO][4281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.129 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0' Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.142 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.156 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.167 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.171 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.175 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.175 [INFO][4281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.182 
[INFO][4281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.189 [INFO][4281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.201 [INFO][4281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.3/26] block=192.168.111.0/26 handle="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.201 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.3/26] handle="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.201 [INFO][4281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:29:18.255106 containerd[1536]: 2025-12-16 12:29:18.201 [INFO][4281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.3/26] IPv6=[] ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" HandleID="k8s-pod-network.77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.255998 containerd[1536]: 2025-12-16 12:29:18.210 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cff0ff36-6ff6-4100-b765-5149b21c7f04", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"coredns-668d6bf9bc-t2s8d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"caliabe7e8ff0f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:18.255998 containerd[1536]: 2025-12-16 12:29:18.210 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.3/32] ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.255998 containerd[1536]: 2025-12-16 12:29:18.210 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabe7e8ff0f9 ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.255998 containerd[1536]: 2025-12-16 12:29:18.219 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.255998 containerd[1536]: 2025-12-16 12:29:18.226 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" 
WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cff0ff36-6ff6-4100-b765-5149b21c7f04", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb", Pod:"coredns-668d6bf9bc-t2s8d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabe7e8ff0f9", MAC:"ee:43:0c:cb:07:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:18.255998 
containerd[1536]: 2025-12-16 12:29:18.240 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" Namespace="kube-system" Pod="coredns-668d6bf9bc-t2s8d" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-coredns--668d6bf9bc--t2s8d-eth0" Dec 16 12:29:18.308425 containerd[1536]: time="2025-12-16T12:29:18.308348772Z" level=info msg="connecting to shim 77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb" address="unix:///run/containerd/s/21bdf858caa75054bcd5174bf1d22b85e0e88aee3ea0a57676a7c6904c66980e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:18.316455 systemd-networkd[1418]: cali3e910f2cc71: Gained IPv6LL Dec 16 12:29:18.336622 kubelet[2757]: I1216 12:29:18.331882 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hdcn2" podStartSLOduration=41.331865853 podStartE2EDuration="41.331865853s" podCreationTimestamp="2025-12-16 12:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:18.27304551 +0000 UTC m=+48.373318247" watchObservedRunningTime="2025-12-16 12:29:18.331865853 +0000 UTC m=+48.432138590" Dec 16 12:29:18.382881 systemd-networkd[1418]: cali83cb755d746: Link UP Dec 16 12:29:18.386988 systemd-networkd[1418]: cali83cb755d746: Gained carrier Dec 16 12:29:18.412655 systemd[1]: Started cri-containerd-77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb.scope - libcontainer container 77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb. 
Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.090 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0 calico-kube-controllers-7d8c786985- calico-system 6eaa9000-d78a-4f2c-90ec-72eb46b8f615 835 0 2025-12-16 12:28:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d8c786985 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 calico-kube-controllers-7d8c786985-6rqkc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali83cb755d746 [] [] }} ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.090 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.169 [INFO][4286] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" HandleID="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.170 [INFO][4286] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" HandleID="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"calico-kube-controllers-7d8c786985-6rqkc", "timestamp":"2025-12-16 12:29:18.169656724 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.170 [INFO][4286] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.201 [INFO][4286] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.202 [INFO][4286] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0' Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.251 [INFO][4286] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.266 [INFO][4286] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.290 [INFO][4286] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.300 [INFO][4286] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.309 [INFO][4286] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.309 [INFO][4286] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.314 [INFO][4286] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9 Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.350 [INFO][4286] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.363 [INFO][4286] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.111.4/26] block=192.168.111.0/26 handle="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.363 [INFO][4286] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.4/26] handle="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.363 [INFO][4286] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:29:18.421947 containerd[1536]: 2025-12-16 12:29:18.363 [INFO][4286] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.4/26] IPv6=[] ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" HandleID="k8s-pod-network.b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.422527 containerd[1536]: 2025-12-16 12:29:18.377 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0", GenerateName:"calico-kube-controllers-7d8c786985-", Namespace:"calico-system", SelfLink:"", UID:"6eaa9000-d78a-4f2c-90ec-72eb46b8f615", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8c786985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"calico-kube-controllers-7d8c786985-6rqkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83cb755d746", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:18.422527 containerd[1536]: 2025-12-16 12:29:18.377 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.4/32] ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.422527 containerd[1536]: 2025-12-16 12:29:18.377 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83cb755d746 ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.422527 containerd[1536]: 2025-12-16 12:29:18.389 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.422527 containerd[1536]: 2025-12-16 12:29:18.391 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0", GenerateName:"calico-kube-controllers-7d8c786985-", Namespace:"calico-system", SelfLink:"", UID:"6eaa9000-d78a-4f2c-90ec-72eb46b8f615", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8c786985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9", Pod:"calico-kube-controllers-7d8c786985-6rqkc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.4/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali83cb755d746", MAC:"0e:bb:6c:dd:81:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:18.422527 containerd[1536]: 2025-12-16 12:29:18.416 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" Namespace="calico-system" Pod="calico-kube-controllers-7d8c786985-6rqkc" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--kube--controllers--7d8c786985--6rqkc-eth0" Dec 16 12:29:18.465748 containerd[1536]: time="2025-12-16T12:29:18.465450982Z" level=info msg="connecting to shim b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9" address="unix:///run/containerd/s/8c852f012cb68361c36f498423b850939e40f565c061099474cb757569949937" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:18.495394 containerd[1536]: time="2025-12-16T12:29:18.495248399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t2s8d,Uid:cff0ff36-6ff6-4100-b765-5149b21c7f04,Namespace:kube-system,Attempt:0,} returns sandbox id \"77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb\"" Dec 16 12:29:18.504332 containerd[1536]: time="2025-12-16T12:29:18.503953133Z" level=info msg="CreateContainer within sandbox \"77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:29:18.511771 systemd[1]: Started cri-containerd-b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9.scope - libcontainer container b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9. 
Dec 16 12:29:18.515825 containerd[1536]: time="2025-12-16T12:29:18.515770714Z" level=info msg="Container 09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:18.523026 containerd[1536]: time="2025-12-16T12:29:18.522974985Z" level=info msg="CreateContainer within sandbox \"77ebf87fb7d2074c3b08e0180ef64db26aae9d2bd692c836766a00d8c4b988cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5\"" Dec 16 12:29:18.525238 containerd[1536]: time="2025-12-16T12:29:18.525199539Z" level=info msg="StartContainer for \"09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5\"" Dec 16 12:29:18.526703 containerd[1536]: time="2025-12-16T12:29:18.526668562Z" level=info msg="connecting to shim 09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5" address="unix:///run/containerd/s/21bdf858caa75054bcd5174bf1d22b85e0e88aee3ea0a57676a7c6904c66980e" protocol=ttrpc version=3 Dec 16 12:29:18.553851 systemd[1]: Started cri-containerd-09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5.scope - libcontainer container 09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5. 
Dec 16 12:29:18.582714 containerd[1536]: time="2025-12-16T12:29:18.581764967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8c786985-6rqkc,Uid:6eaa9000-d78a-4f2c-90ec-72eb46b8f615,Namespace:calico-system,Attempt:0,} returns sandbox id \"b1e3e7d5646d800f252bbac93bb63596f4c18f57037faa8c8589d7738d8814b9\"" Dec 16 12:29:18.585197 containerd[1536]: time="2025-12-16T12:29:18.584960576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:18.600352 containerd[1536]: time="2025-12-16T12:29:18.600315931Z" level=info msg="StartContainer for \"09c3cf45a68c270aefbb15dc846d1d62d13cdc37a1ada33a29f7d4c5ee50ddc5\" returns successfully" Dec 16 12:29:18.930691 containerd[1536]: time="2025-12-16T12:29:18.930175352Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:18.932150 containerd[1536]: time="2025-12-16T12:29:18.931991780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:18.932150 containerd[1536]: time="2025-12-16T12:29:18.932118382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:18.932730 kubelet[2757]: E1216 12:29:18.932636 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:18.932824 kubelet[2757]: E1216 12:29:18.932740 2757 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:18.938130 kubelet[2757]: E1216 12:29:18.938010 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:18.939657 kubelet[2757]: E1216 12:29:18.939600 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:29:19.256159 kubelet[2757]: E1216 12:29:19.255732 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:29:19.292308 kubelet[2757]: I1216 12:29:19.292228 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t2s8d" podStartSLOduration=42.292203877 podStartE2EDuration="42.292203877s" podCreationTimestamp="2025-12-16 12:28:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:19.270468792 +0000 UTC m=+49.370741489" watchObservedRunningTime="2025-12-16 12:29:19.292203877 +0000 UTC m=+49.392476654" Dec 16 12:29:19.531696 systemd-networkd[1418]: caliabe7e8ff0f9: Gained IPv6LL Dec 16 12:29:20.259376 kubelet[2757]: E1216 12:29:20.259313 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:29:20.363650 systemd-networkd[1418]: cali83cb755d746: Gained IPv6LL Dec 16 12:29:21.009381 containerd[1536]: time="2025-12-16T12:29:21.009302960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5llv,Uid:de20fd2d-5c54-466a-8865-47a00aec7cac,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:21.010431 containerd[1536]: time="2025-12-16T12:29:21.009303320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-9x9hg,Uid:95ed81e2-a00e-4be1-9fa5-31d53c3e4933,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:29:21.010747 containerd[1536]: time="2025-12-16T12:29:21.010721140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-snhmr,Uid:b160a20e-ffb1-4e7e-91c1-0be63ff8efe3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:29:21.286747 systemd-networkd[1418]: calibe80e4c49a1: Link UP Dec 16 12:29:21.292411 systemd-networkd[1418]: calibe80e4c49a1: Gained carrier Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.110 [INFO][4461] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0 calico-apiserver-654c6f7d5b- calico-apiserver b160a20e-ffb1-4e7e-91c1-0be63ff8efe3 828 0 2025-12-16 12:28:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654c6f7d5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 calico-apiserver-654c6f7d5b-snhmr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe80e4c49a1 [] [] }} ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" 
Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.111 [INFO][4461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.203 [INFO][4484] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" HandleID="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.203 [INFO][4484] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" HandleID="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"calico-apiserver-654c6f7d5b-snhmr", "timestamp":"2025-12-16 12:29:21.203449045 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.203 [INFO][4484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.203 [INFO][4484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.203 [INFO][4484] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0' Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.222 [INFO][4484] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.233 [INFO][4484] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.241 [INFO][4484] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.245 [INFO][4484] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.249 [INFO][4484] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.249 [INFO][4484] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.252 [INFO][4484] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912 Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.258 [INFO][4484] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" 
host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.271 [INFO][4484] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.5/26] block=192.168.111.0/26 handle="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.271 [INFO][4484] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.5/26] handle="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.273 [INFO][4484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:29:21.310678 containerd[1536]: 2025-12-16 12:29:21.273 [INFO][4484] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.5/26] IPv6=[] ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" HandleID="k8s-pod-network.3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.311626 containerd[1536]: 2025-12-16 12:29:21.278 [INFO][4461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0", GenerateName:"calico-apiserver-654c6f7d5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b160a20e-ffb1-4e7e-91c1-0be63ff8efe3", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 48, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654c6f7d5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"calico-apiserver-654c6f7d5b-snhmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe80e4c49a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:21.311626 containerd[1536]: 2025-12-16 12:29:21.279 [INFO][4461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.5/32] ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.311626 containerd[1536]: 2025-12-16 12:29:21.279 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe80e4c49a1 ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.311626 containerd[1536]: 2025-12-16 12:29:21.282 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.311626 containerd[1536]: 2025-12-16 12:29:21.283 [INFO][4461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0", GenerateName:"calico-apiserver-654c6f7d5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"b160a20e-ffb1-4e7e-91c1-0be63ff8efe3", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654c6f7d5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912", Pod:"calico-apiserver-654c6f7d5b-snhmr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe80e4c49a1", MAC:"8a:0a:c3:7b:ca:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:21.311626 containerd[1536]: 2025-12-16 12:29:21.307 [INFO][4461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-snhmr" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--snhmr-eth0" Dec 16 12:29:21.352587 containerd[1536]: time="2025-12-16T12:29:21.352530849Z" level=info msg="connecting to shim 3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912" address="unix:///run/containerd/s/c1cd2c3568fb5a4d5620c48847a002fffd81ab3bf770f8498f84111784b5a8a9" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:21.393538 systemd[1]: Started cri-containerd-3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912.scope - libcontainer container 3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912. 
Dec 16 12:29:21.395511 systemd-networkd[1418]: calidf899755c88: Link UP Dec 16 12:29:21.396715 systemd-networkd[1418]: calidf899755c88: Gained carrier Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.139 [INFO][4449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0 csi-node-driver- calico-system de20fd2d-5c54-466a-8865-47a00aec7cac 741 0 2025-12-16 12:28:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 csi-node-driver-c5llv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidf899755c88 [] [] }} ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.141 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.208 [INFO][4489] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" HandleID="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.208 [INFO][4489] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" HandleID="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031f6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"csi-node-driver-c5llv", "timestamp":"2025-12-16 12:29:21.208087631 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.208 [INFO][4489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.272 [INFO][4489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.272 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0' Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.324 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.335 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.344 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.349 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.353 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.354 [INFO][4489] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.356 [INFO][4489] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.367 [INFO][4489] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.379 [INFO][4489] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.111.6/26] block=192.168.111.0/26 handle="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.379 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.6/26] handle="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.379 [INFO][4489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:29:21.435986 containerd[1536]: 2025-12-16 12:29:21.379 [INFO][4489] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.6/26] IPv6=[] ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" HandleID="k8s-pod-network.b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.437036 containerd[1536]: 2025-12-16 12:29:21.387 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de20fd2d-5c54-466a-8865-47a00aec7cac", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"csi-node-driver-c5llv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf899755c88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:21.437036 containerd[1536]: 2025-12-16 12:29:21.387 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.6/32] ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.437036 containerd[1536]: 2025-12-16 12:29:21.388 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf899755c88 ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.437036 containerd[1536]: 2025-12-16 12:29:21.405 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.437036 
containerd[1536]: 2025-12-16 12:29:21.413 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de20fd2d-5c54-466a-8865-47a00aec7cac", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c", Pod:"csi-node-driver-c5llv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf899755c88", MAC:"2a:65:a9:ac:54:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:21.437036 containerd[1536]: 
2025-12-16 12:29:21.431 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" Namespace="calico-system" Pod="csi-node-driver-c5llv" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-csi--node--driver--c5llv-eth0" Dec 16 12:29:21.483335 containerd[1536]: time="2025-12-16T12:29:21.482907546Z" level=info msg="connecting to shim b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c" address="unix:///run/containerd/s/79fc9f62f504b90259c53f6cc987b8e6bc73420290a95aa33a871c982500586a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:21.506607 systemd-networkd[1418]: calif6a1e006d0d: Link UP Dec 16 12:29:21.509580 systemd-networkd[1418]: calif6a1e006d0d: Gained carrier Dec 16 12:29:21.534280 containerd[1536]: time="2025-12-16T12:29:21.534010513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-snhmr,Uid:b160a20e-ffb1-4e7e-91c1-0be63ff8efe3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3c4907caeb8f14ad824418f0942d37170829630cfb270909ea0fe2d79140b912\"" Dec 16 12:29:21.541502 containerd[1536]: time="2025-12-16T12:29:21.540966533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.150 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0 calico-apiserver-654c6f7d5b- calico-apiserver 95ed81e2-a00e-4be1-9fa5-31d53c3e4933 836 0 2025-12-16 12:28:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:654c6f7d5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 calico-apiserver-654c6f7d5b-9x9hg eth0 
calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6a1e006d0d [] [] }} ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.151 [INFO][4456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.222 [INFO][4494] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" HandleID="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.222 [INFO][4494] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" HandleID="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"calico-apiserver-654c6f7d5b-9x9hg", "timestamp":"2025-12-16 12:29:21.222453156 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.222 [INFO][4494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.380 [INFO][4494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.380 [INFO][4494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0' Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.424 [INFO][4494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.440 [INFO][4494] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.455 [INFO][4494] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.461 [INFO][4494] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.465 [INFO][4494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.466 [INFO][4494] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.470 [INFO][4494] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3 Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.478 [INFO][4494] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.493 [INFO][4494] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.7/26] block=192.168.111.0/26 handle="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.493 [INFO][4494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.7/26] handle="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.493 [INFO][4494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:29:21.561976 containerd[1536]: 2025-12-16 12:29:21.494 [INFO][4494] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.7/26] IPv6=[] ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" HandleID="k8s-pod-network.fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.562677 containerd[1536]: 2025-12-16 12:29:21.498 [INFO][4456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0", GenerateName:"calico-apiserver-654c6f7d5b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"95ed81e2-a00e-4be1-9fa5-31d53c3e4933", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654c6f7d5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"calico-apiserver-654c6f7d5b-9x9hg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6a1e006d0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:21.562677 containerd[1536]: 2025-12-16 12:29:21.501 [INFO][4456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.7/32] ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.562677 containerd[1536]: 2025-12-16 12:29:21.501 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6a1e006d0d ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" 
WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.562677 containerd[1536]: 2025-12-16 12:29:21.513 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.562677 containerd[1536]: 2025-12-16 12:29:21.514 [INFO][4456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0", GenerateName:"calico-apiserver-654c6f7d5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"95ed81e2-a00e-4be1-9fa5-31d53c3e4933", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"654c6f7d5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", 
ContainerID:"fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3", Pod:"calico-apiserver-654c6f7d5b-9x9hg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6a1e006d0d", MAC:"d2:78:55:ab:ba:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:21.562677 containerd[1536]: 2025-12-16 12:29:21.539 [INFO][4456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" Namespace="calico-apiserver" Pod="calico-apiserver-654c6f7d5b-9x9hg" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-calico--apiserver--654c6f7d5b--9x9hg-eth0" Dec 16 12:29:21.592892 systemd[1]: Started cri-containerd-b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c.scope - libcontainer container b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c. Dec 16 12:29:21.626256 containerd[1536]: time="2025-12-16T12:29:21.626157986Z" level=info msg="connecting to shim fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3" address="unix:///run/containerd/s/8bf611f290c4023ab0278e31a3b7c7404593263a3a38ba0bb3acfe48b2b906ec" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:21.667651 systemd[1]: Started cri-containerd-fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3.scope - libcontainer container fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3. 
Dec 16 12:29:21.684387 containerd[1536]: time="2025-12-16T12:29:21.683404241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c5llv,Uid:de20fd2d-5c54-466a-8865-47a00aec7cac,Namespace:calico-system,Attempt:0,} returns sandbox id \"b19133b21d51ca12dc1d074ab73716a76b88ea0898155800be5adb9e9e4f6d1c\"" Dec 16 12:29:21.731288 containerd[1536]: time="2025-12-16T12:29:21.731225602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-654c6f7d5b-9x9hg,Uid:95ed81e2-a00e-4be1-9fa5-31d53c3e4933,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fbaee74e90c7c539dbeace70592938d5f9acd598a6ec4953c114cc18893d0bc3\"" Dec 16 12:29:21.933388 containerd[1536]: time="2025-12-16T12:29:21.933253440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:21.934869 containerd[1536]: time="2025-12-16T12:29:21.934700261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:21.935011 containerd[1536]: time="2025-12-16T12:29:21.934783262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:21.935362 kubelet[2757]: E1216 12:29:21.935292 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:21.937043 kubelet[2757]: E1216 12:29:21.935417 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:21.937043 kubelet[2757]: E1216 12:29:21.936031 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmbc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:21.937616 containerd[1536]: time="2025-12-16T12:29:21.936072400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:29:21.937713 kubelet[2757]: E1216 12:29:21.937511 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:29:22.009713 containerd[1536]: 
time="2025-12-16T12:29:22.009650965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qk9mh,Uid:9e9d246f-dbf0-4c36-8e21-415862de6ecd,Namespace:calico-system,Attempt:0,}" Dec 16 12:29:22.168839 systemd-networkd[1418]: cali7a1f996e8b5: Link UP Dec 16 12:29:22.170334 systemd-networkd[1418]: cali7a1f996e8b5: Gained carrier Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.068 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0 goldmane-666569f655- calico-system 9e9d246f-dbf0-4c36-8e21-415862de6ecd 832 0 2025-12-16 12:28:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-0-7f64ef3ba0 goldmane-666569f655-qk9mh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7a1f996e8b5 [] [] }} ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.068 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.099 [INFO][4678] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" HandleID="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" 
Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.099 [INFO][4678] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" HandleID="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-0-7f64ef3ba0", "pod":"goldmane-666569f655-qk9mh", "timestamp":"2025-12-16 12:29:22.099497574 +0000 UTC"}, Hostname:"ci-4459-2-2-0-7f64ef3ba0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.099 [INFO][4678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.099 [INFO][4678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.099 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-0-7f64ef3ba0' Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.115 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.122 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.128 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.131 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.134 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.0/26 host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.135 [INFO][4678] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.0/26 handle="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.137 [INFO][4678] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762 Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.144 [INFO][4678] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.0/26 handle="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.157 [INFO][4678] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.111.8/26] block=192.168.111.0/26 handle="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.158 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.8/26] handle="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" host="ci-4459-2-2-0-7f64ef3ba0" Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.158 [INFO][4678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:29:22.192825 containerd[1536]: 2025-12-16 12:29:22.158 [INFO][4678] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.8/26] IPv6=[] ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" HandleID="k8s-pod-network.3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Workload="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.194017 containerd[1536]: 2025-12-16 12:29:22.163 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9e9d246f-dbf0-4c36-8e21-415862de6ecd", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"", Pod:"goldmane-666569f655-qk9mh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a1f996e8b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:22.194017 containerd[1536]: 2025-12-16 12:29:22.163 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.8/32] ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.194017 containerd[1536]: 2025-12-16 12:29:22.163 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a1f996e8b5 ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.194017 containerd[1536]: 2025-12-16 12:29:22.172 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.194017 containerd[1536]: 2025-12-16 12:29:22.173 [INFO][4667] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9e9d246f-dbf0-4c36-8e21-415862de6ecd", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-0-7f64ef3ba0", ContainerID:"3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762", Pod:"goldmane-666569f655-qk9mh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7a1f996e8b5", MAC:"a6:e9:0b:6a:db:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:29:22.194017 containerd[1536]: 2025-12-16 12:29:22.189 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" Namespace="calico-system" Pod="goldmane-666569f655-qk9mh" WorkloadEndpoint="ci--4459--2--2--0--7f64ef3ba0-k8s-goldmane--666569f655--qk9mh-eth0" Dec 16 12:29:22.221879 containerd[1536]: time="2025-12-16T12:29:22.221835234Z" level=info msg="connecting to shim 3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762" address="unix:///run/containerd/s/0a4b04f026380b3966f70df35a4e60b7e3de16dec54814da9bf3d3a84aaed787" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:22.251576 systemd[1]: Started cri-containerd-3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762.scope - libcontainer container 3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762. Dec 16 12:29:22.276465 kubelet[2757]: E1216 12:29:22.276408 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:29:22.296841 containerd[1536]: time="2025-12-16T12:29:22.296296749Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:22.299262 containerd[1536]: time="2025-12-16T12:29:22.299212590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:29:22.299586 containerd[1536]: 
time="2025-12-16T12:29:22.299221070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:29:22.300660 kubelet[2757]: E1216 12:29:22.300596 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:22.300971 kubelet[2757]: E1216 12:29:22.300838 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:22.301538 kubelet[2757]: E1216 12:29:22.301118 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:22.303225 containerd[1536]: time="2025-12-16T12:29:22.303180805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:22.342587 containerd[1536]: time="2025-12-16T12:29:22.342543232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qk9mh,Uid:9e9d246f-dbf0-4c36-8e21-415862de6ecd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a5ff9e5295c773b3b43def326752da8f2368c91812e4fe43b88f0a34c3e0762\"" Dec 16 12:29:22.696865 containerd[1536]: time="2025-12-16T12:29:22.696713035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:22.698495 containerd[1536]: time="2025-12-16T12:29:22.698440059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:22.698655 containerd[1536]: time="2025-12-16T12:29:22.698546460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:22.698757 kubelet[2757]: E1216 12:29:22.698709 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:22.698823 kubelet[2757]: E1216 12:29:22.698764 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:22.699104 kubelet[2757]: E1216 12:29:22.699016 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mv48m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:22.699714 containerd[1536]: time="2025-12-16T12:29:22.699657796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:29:22.700796 kubelet[2757]: E1216 12:29:22.700737 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:29:22.860050 systemd-networkd[1418]: 
calidf899755c88: Gained IPv6LL Dec 16 12:29:22.924078 systemd-networkd[1418]: calibe80e4c49a1: Gained IPv6LL Dec 16 12:29:23.053367 containerd[1536]: time="2025-12-16T12:29:23.053176452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:23.055432 containerd[1536]: time="2025-12-16T12:29:23.055366601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:29:23.055670 containerd[1536]: time="2025-12-16T12:29:23.055408082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:29:23.055710 kubelet[2757]: E1216 12:29:23.055652 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:23.055710 kubelet[2757]: E1216 12:29:23.055700 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:23.056066 kubelet[2757]: E1216 12:29:23.055906 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:23.057404 containerd[1536]: time="2025-12-16T12:29:23.056724340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:29:23.057715 kubelet[2757]: E1216 12:29:23.057657 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:23.179637 systemd-networkd[1418]: calif6a1e006d0d: Gained IPv6LL Dec 16 12:29:23.289989 kubelet[2757]: E1216 12:29:23.289870 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" 
podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:29:23.290374 kubelet[2757]: E1216 12:29:23.290274 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:29:23.291598 kubelet[2757]: E1216 12:29:23.291546 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:23.410253 containerd[1536]: time="2025-12-16T12:29:23.409774009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:23.413425 containerd[1536]: time="2025-12-16T12:29:23.413250496Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:29:23.413870 containerd[1536]: time="2025-12-16T12:29:23.413409899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:23.414134 kubelet[2757]: E1216 12:29:23.414065 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:23.414212 kubelet[2757]: E1216 12:29:23.414146 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:23.415494 kubelet[2757]: E1216 12:29:23.415339 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfw7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:23.416910 kubelet[2757]: E1216 12:29:23.416719 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:29:23.948843 systemd-networkd[1418]: cali7a1f996e8b5: Gained IPv6LL Dec 16 12:29:24.292278 kubelet[2757]: E1216 12:29:24.292128 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:29:28.008553 containerd[1536]: time="2025-12-16T12:29:28.008480141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:29:28.354520 containerd[1536]: time="2025-12-16T12:29:28.354402869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:28.356528 containerd[1536]: time="2025-12-16T12:29:28.356390093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:29:28.356528 containerd[1536]: time="2025-12-16T12:29:28.356463614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:29:28.357221 kubelet[2757]: E1216 12:29:28.357083 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:28.357221 kubelet[2757]: E1216 12:29:28.357182 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:28.358453 kubelet[2757]: E1216 12:29:28.358011 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58879817308484fbfd5b74fefa0ca63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:28.360571 containerd[1536]: time="2025-12-16T12:29:28.360497663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:29:28.729538 containerd[1536]: time="2025-12-16T12:29:28.729404228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:28.731476 containerd[1536]: time="2025-12-16T12:29:28.731365252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:29:28.731596 containerd[1536]: time="2025-12-16T12:29:28.731379412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:28.731687 kubelet[2757]: E1216 12:29:28.731619 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:28.731687 kubelet[2757]: E1216 12:29:28.731672 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:28.731842 kubelet[2757]: E1216 12:29:28.731786 2757 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:28.733298 kubelet[2757]: E1216 12:29:28.733245 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:29:33.008886 containerd[1536]: time="2025-12-16T12:29:33.008001954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:33.355810 containerd[1536]: time="2025-12-16T12:29:33.355727735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:33.358121 containerd[1536]: time="2025-12-16T12:29:33.358061520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:33.358328 containerd[1536]: time="2025-12-16T12:29:33.358166961Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:33.358422 kubelet[2757]: E1216 12:29:33.358374 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:33.358706 kubelet[2757]: E1216 12:29:33.358437 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:33.358706 kubelet[2757]: E1216 12:29:33.358629 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:33.360041 kubelet[2757]: E1216 12:29:33.359989 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:29:35.010782 containerd[1536]: time="2025-12-16T12:29:35.009760969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:29:35.402607 containerd[1536]: 
time="2025-12-16T12:29:35.402542053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:35.403971 containerd[1536]: time="2025-12-16T12:29:35.403846867Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:29:35.403971 containerd[1536]: time="2025-12-16T12:29:35.403971668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:35.404285 kubelet[2757]: E1216 12:29:35.404207 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:35.404705 kubelet[2757]: E1216 12:29:35.404286 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:35.404783 kubelet[2757]: E1216 12:29:35.404708 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfw7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:35.405856 containerd[1536]: time="2025-12-16T12:29:35.405817447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:35.406222 kubelet[2757]: E1216 12:29:35.406082 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:29:35.759094 containerd[1536]: time="2025-12-16T12:29:35.758927242Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Dec 16 12:29:35.761134 containerd[1536]: time="2025-12-16T12:29:35.761055784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:35.761271 containerd[1536]: time="2025-12-16T12:29:35.761167145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:35.762123 kubelet[2757]: E1216 12:29:35.761466 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:35.762123 kubelet[2757]: E1216 12:29:35.761517 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:35.762123 kubelet[2757]: E1216 12:29:35.761700 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mv48m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:35.763285 kubelet[2757]: E1216 12:29:35.763247 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:29:36.014669 containerd[1536]: time="2025-12-16T12:29:36.012855734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:29:36.387885 containerd[1536]: time="2025-12-16T12:29:36.387681710Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:36.389123 containerd[1536]: time="2025-12-16T12:29:36.389026044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:29:36.389265 containerd[1536]: time="2025-12-16T12:29:36.389149405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:29:36.389604 kubelet[2757]: E1216 12:29:36.389448 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:36.389604 kubelet[2757]: E1216 12:29:36.389542 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:36.390210 kubelet[2757]: E1216 12:29:36.389856 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:36.390703 containerd[1536]: time="2025-12-16T12:29:36.390206456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:36.731879 containerd[1536]: time="2025-12-16T12:29:36.731632775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:36.733634 containerd[1536]: time="2025-12-16T12:29:36.733553475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:36.733634 containerd[1536]: time="2025-12-16T12:29:36.733591595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:36.734092 kubelet[2757]: E1216 12:29:36.733969 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:36.734092 kubelet[2757]: E1216 12:29:36.734048 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:36.734733 kubelet[2757]: E1216 12:29:36.734520 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmbc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:36.734852 containerd[1536]: time="2025-12-16T12:29:36.734790927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:29:36.736484 kubelet[2757]: E1216 12:29:36.736383 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:29:37.072581 containerd[1536]: 
time="2025-12-16T12:29:37.072498675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:37.074135 containerd[1536]: time="2025-12-16T12:29:37.074063850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:29:37.074288 containerd[1536]: time="2025-12-16T12:29:37.074212812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:29:37.074628 kubelet[2757]: E1216 12:29:37.074445 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:37.074797 kubelet[2757]: E1216 12:29:37.074606 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:37.075250 kubelet[2757]: E1216 12:29:37.075156 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:37.077094 kubelet[2757]: E1216 12:29:37.077042 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:42.012998 kubelet[2757]: E1216 12:29:42.012721 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:29:46.009932 kubelet[2757]: E1216 12:29:46.009852 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:29:48.011829 kubelet[2757]: E1216 12:29:48.011754 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:29:48.013539 kubelet[2757]: E1216 12:29:48.013472 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:29:49.009696 kubelet[2757]: E1216 12:29:49.009538 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:29:50.014121 kubelet[2757]: E1216 12:29:50.014063 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:29:55.013604 containerd[1536]: time="2025-12-16T12:29:55.013257493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:29:55.387543 containerd[1536]: time="2025-12-16T12:29:55.386985701Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:55.390289 containerd[1536]: time="2025-12-16T12:29:55.388874074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:29:55.390289 containerd[1536]: time="2025-12-16T12:29:55.388976915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:29:55.390476 kubelet[2757]: E1216 12:29:55.389139 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:55.390476 kubelet[2757]: E1216 12:29:55.389188 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:55.390476 kubelet[2757]: E1216 12:29:55.389329 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58879817308484fbfd5b74fefa0ca63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:55.394225 containerd[1536]: time="2025-12-16T12:29:55.392419499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 
12:29:55.765904 containerd[1536]: time="2025-12-16T12:29:55.765342102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:55.767073 containerd[1536]: time="2025-12-16T12:29:55.767011433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:29:55.767557 containerd[1536]: time="2025-12-16T12:29:55.767115074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:55.767662 kubelet[2757]: E1216 12:29:55.767308 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:55.767662 kubelet[2757]: E1216 12:29:55.767397 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:55.767759 kubelet[2757]: E1216 12:29:55.767635 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:55.769429 kubelet[2757]: E1216 12:29:55.769339 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:29:58.010874 containerd[1536]: time="2025-12-16T12:29:58.010502902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:58.368572 containerd[1536]: time="2025-12-16T12:29:58.368487238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:58.370456 containerd[1536]: time="2025-12-16T12:29:58.370328371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:58.370655 containerd[1536]: time="2025-12-16T12:29:58.370494612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Dec 16 12:29:58.370711 kubelet[2757]: E1216 12:29:58.370671 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:58.371120 kubelet[2757]: E1216 12:29:58.370732 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:58.371120 kubelet[2757]: E1216 12:29:58.370934 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:58.372734 kubelet[2757]: E1216 12:29:58.372598 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:30:00.010095 containerd[1536]: time="2025-12-16T12:30:00.009858829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:30:00.341577 containerd[1536]: 
time="2025-12-16T12:30:00.341479481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:00.343342 containerd[1536]: time="2025-12-16T12:30:00.343111732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:30:00.343342 containerd[1536]: time="2025-12-16T12:30:00.343241333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:30:00.344471 kubelet[2757]: E1216 12:30:00.343707 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:00.344471 kubelet[2757]: E1216 12:30:00.343777 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:00.344471 kubelet[2757]: E1216 12:30:00.343955 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:00.349886 containerd[1536]: time="2025-12-16T12:30:00.349504253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:30:00.677814 containerd[1536]: time="2025-12-16T12:30:00.677665643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:00.679477 containerd[1536]: time="2025-12-16T12:30:00.679423614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:30:00.679623 containerd[1536]: time="2025-12-16T12:30:00.679535335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:30:00.679794 kubelet[2757]: E1216 12:30:00.679752 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:00.680293 kubelet[2757]: E1216 12:30:00.680050 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:00.680293 kubelet[2757]: E1216 
12:30:00.680186 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:00.681680 kubelet[2757]: E1216 12:30:00.681618 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:30:01.009622 containerd[1536]: time="2025-12-16T12:30:01.009482576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:01.541600 containerd[1536]: time="2025-12-16T12:30:01.541537264Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:01.543253 containerd[1536]: time="2025-12-16T12:30:01.543149475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:01.543253 
containerd[1536]: time="2025-12-16T12:30:01.543253435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:01.543604 kubelet[2757]: E1216 12:30:01.543508 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:01.543604 kubelet[2757]: E1216 12:30:01.543570 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:01.544216 kubelet[2757]: E1216 12:30:01.543919 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mv48m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:01.545194 kubelet[2757]: E1216 12:30:01.545047 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:30:03.011771 containerd[1536]: time="2025-12-16T12:30:03.011719874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:03.380546 containerd[1536]: time="2025-12-16T12:30:03.380428658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:03.382380 containerd[1536]: time="2025-12-16T12:30:03.382235429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:03.382380 containerd[1536]: time="2025-12-16T12:30:03.382343149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:03.383369 kubelet[2757]: E1216 12:30:03.383132 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:03.383369 kubelet[2757]: E1216 12:30:03.383188 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:03.384425 kubelet[2757]: E1216 12:30:03.383349 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmbc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:03.385480 kubelet[2757]: E1216 12:30:03.385120 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:30:04.010531 containerd[1536]: time="2025-12-16T12:30:04.010431046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:30:04.364129 containerd[1536]: 
time="2025-12-16T12:30:04.364025505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:04.367243 containerd[1536]: time="2025-12-16T12:30:04.367179284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:30:04.367376 containerd[1536]: time="2025-12-16T12:30:04.367219404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:04.367548 kubelet[2757]: E1216 12:30:04.367474 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:04.367602 kubelet[2757]: E1216 12:30:04.367545 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:04.367722 kubelet[2757]: E1216 12:30:04.367677 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfw7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:04.369434 kubelet[2757]: E1216 12:30:04.369365 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:30:07.010955 kubelet[2757]: E1216 12:30:07.010756 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:30:12.009663 kubelet[2757]: E1216 12:30:12.008978 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:30:14.011083 kubelet[2757]: E1216 12:30:14.011028 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:30:15.010113 kubelet[2757]: E1216 12:30:15.010046 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:30:15.010523 kubelet[2757]: E1216 12:30:15.009999 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:30:18.010204 kubelet[2757]: E1216 12:30:18.010102 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:30:18.011746 kubelet[2757]: E1216 12:30:18.011637 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:30:26.008893 kubelet[2757]: E1216 12:30:26.008834 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:30:27.009307 kubelet[2757]: E1216 12:30:27.009059 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:30:27.010404 kubelet[2757]: E1216 12:30:27.010180 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:30:29.008158 kubelet[2757]: E1216 12:30:29.008087 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:30:31.008568 kubelet[2757]: E1216 12:30:31.008443 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:30:31.010838 kubelet[2757]: E1216 12:30:31.010768 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:30:37.008169 kubelet[2757]: E1216 12:30:37.008069 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:30:39.008331 containerd[1536]: time="2025-12-16T12:30:39.008276840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:30:39.345492 containerd[1536]: time="2025-12-16T12:30:39.345162243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:39.348545 containerd[1536]: time="2025-12-16T12:30:39.348468537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:39.348545 containerd[1536]: time="2025-12-16T12:30:39.348689938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:30:39.349447 kubelet[2757]: E1216 12:30:39.349374 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:39.349447 kubelet[2757]: E1216 12:30:39.349437 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:39.352051 kubelet[2757]: E1216 12:30:39.349586 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/s
erviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:39.352051 kubelet[2757]: E1216 12:30:39.350742 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:30:40.010662 kubelet[2757]: E1216 12:30:40.010511 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:30:43.009117 containerd[1536]: time="2025-12-16T12:30:43.008957335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:30:43.369435 containerd[1536]: time="2025-12-16T12:30:43.369373036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:43.370849 containerd[1536]: time="2025-12-16T12:30:43.370792362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:30:43.370963 containerd[1536]: time="2025-12-16T12:30:43.370888442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:30:43.373385 kubelet[2757]: E1216 12:30:43.371169 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:43.373385 kubelet[2757]: E1216 12:30:43.371219 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:43.374401 kubelet[2757]: E1216 12:30:43.374129 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58879817308484fbfd5b74fefa0ca63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:43.374795 containerd[1536]: time="2025-12-16T12:30:43.374736338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:43.737218 
containerd[1536]: time="2025-12-16T12:30:43.736592165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:30:43.738142 containerd[1536]: time="2025-12-16T12:30:43.738079292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:30:43.738426 containerd[1536]: time="2025-12-16T12:30:43.738190292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:30:43.738814 kubelet[2757]: E1216 12:30:43.738379 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:30:43.738814 kubelet[2757]: E1216 12:30:43.738433 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:30:43.740621 kubelet[2757]: E1216 12:30:43.738777 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mv48m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:30:43.740853 containerd[1536]: time="2025-12-16T12:30:43.739584578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:30:43.741115 kubelet[2757]: E1216 12:30:43.741065 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933"
Dec 16 12:30:44.097757 containerd[1536]: time="2025-12-16T12:30:44.097694907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:30:44.099515 containerd[1536]: time="2025-12-16T12:30:44.099439674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:30:44.099588 containerd[1536]: time="2025-12-16T12:30:44.099577155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:30:44.099948 kubelet[2757]: E1216 12:30:44.099826 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:30:44.099948 kubelet[2757]: E1216 12:30:44.099897 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:30:44.100236 kubelet[2757]: E1216 12:30:44.100188 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:30:44.101571 kubelet[2757]: E1216 12:30:44.101497 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c"
Dec 16 12:30:45.008806 containerd[1536]: time="2025-12-16T12:30:45.008765219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 12:30:45.358334 containerd[1536]: time="2025-12-16T12:30:45.358216578Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:30:45.361915 containerd[1536]: time="2025-12-16T12:30:45.361772913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 12:30:45.361915 containerd[1536]: time="2025-12-16T12:30:45.361882513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:30:45.363533 kubelet[2757]: E1216 12:30:45.362819 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:30:45.363533 kubelet[2757]: E1216 12:30:45.362884 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:30:45.363533 kubelet[2757]: E1216 12:30:45.363019 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfw7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:30:45.366203 kubelet[2757]: E1216 12:30:45.364631 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd"
Dec 16 12:30:49.008556 containerd[1536]: time="2025-12-16T12:30:49.008478658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:30:49.367416 containerd[1536]: time="2025-12-16T12:30:49.367175942Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:30:49.369136 containerd[1536]: time="2025-12-16T12:30:49.369003429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:30:49.369136 containerd[1536]: time="2025-12-16T12:30:49.369079109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:30:49.369312 kubelet[2757]: E1216 12:30:49.369241 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:30:49.369312 kubelet[2757]: E1216 12:30:49.369288 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:30:49.369691 kubelet[2757]: E1216 12:30:49.369428 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmbc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:30:49.371027 kubelet[2757]: E1216 12:30:49.370909 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3"
Dec 16 12:30:54.019935 kubelet[2757]: E1216 12:30:54.019825 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615"
Dec 16 12:30:54.023381 containerd[1536]: time="2025-12-16T12:30:54.023327801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:30:54.387242 containerd[1536]: time="2025-12-16T12:30:54.387180149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:30:54.389883 containerd[1536]: time="2025-12-16T12:30:54.389815199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:30:54.390027 containerd[1536]: time="2025-12-16T12:30:54.389949399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 12:30:54.390332 kubelet[2757]: E1216 12:30:54.390242 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:30:54.390332 kubelet[2757]: E1216 12:30:54.390315 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:30:54.390998 kubelet[2757]: E1216 12:30:54.390938 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:30:54.393865 containerd[1536]: time="2025-12-16T12:30:54.393707254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:30:54.757902 containerd[1536]: time="2025-12-16T12:30:54.757656522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:30:54.759280 containerd[1536]: time="2025-12-16T12:30:54.759171048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:30:54.759280 containerd[1536]: time="2025-12-16T12:30:54.759234608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 12:30:54.759689 kubelet[2757]: E1216 12:30:54.759611 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:30:54.759689 kubelet[2757]: E1216 12:30:54.759670 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:30:54.760115 kubelet[2757]: E1216 12:30:54.760016 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:30:54.762198 kubelet[2757]: E1216 12:30:54.761435 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac"
Dec 16 12:30:55.009439 kubelet[2757]: E1216 12:30:55.008983 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933"
Dec 16 12:30:56.011465 kubelet[2757]: E1216 12:30:56.011398 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c"
Dec 16 12:31:00.009871 kubelet[2757]: E1216 12:31:00.008941 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd"
Dec 16 12:31:01.010143 kubelet[2757]: E1216 12:31:01.009780 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3"
Dec 16 12:31:07.011168 kubelet[2757]: E1216 12:31:07.011118 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c"
Dec 16 12:31:07.021064 kubelet[2757]: E1216 12:31:07.021008 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615"
Dec 16 12:31:08.010928 kubelet[2757]: E1216 12:31:08.010844 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933"
Dec 16 12:31:08.011844 kubelet[2757]: E1216 12:31:08.011679 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac"
Dec 16 12:31:08.578935 systemd[1]: Started sshd@7-88.99.82.111:22-139.178.89.65:34252.service - OpenSSH per-connection server daemon (139.178.89.65:34252).
Dec 16 12:31:09.599424 sshd[4902]: Accepted publickey for core from 139.178.89.65 port 34252 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:09.603322 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:09.614722 systemd-logind[1519]: New session 8 of user core. Dec 16 12:31:09.618658 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:31:10.411935 sshd[4905]: Connection closed by 139.178.89.65 port 34252 Dec 16 12:31:10.413619 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:10.420840 systemd[1]: sshd@7-88.99.82.111:22-139.178.89.65:34252.service: Deactivated successfully. Dec 16 12:31:10.425941 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:31:10.428615 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:31:10.430978 systemd-logind[1519]: Removed session 8. Dec 16 12:31:15.011220 kubelet[2757]: E1216 12:31:15.010473 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:31:15.013242 kubelet[2757]: E1216 12:31:15.010645 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:31:15.586779 systemd[1]: Started sshd@8-88.99.82.111:22-139.178.89.65:44432.service - OpenSSH per-connection server daemon (139.178.89.65:44432). Dec 16 12:31:16.613397 sshd[4946]: Accepted publickey for core from 139.178.89.65 port 44432 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:16.615954 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:16.620944 systemd-logind[1519]: New session 9 of user core. Dec 16 12:31:16.630480 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:31:17.408059 sshd[4949]: Connection closed by 139.178.89.65 port 44432 Dec 16 12:31:17.411677 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:17.418618 systemd[1]: sshd@8-88.99.82.111:22-139.178.89.65:44432.service: Deactivated successfully. Dec 16 12:31:17.421345 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:31:17.423005 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:31:17.425220 systemd-logind[1519]: Removed session 9. Dec 16 12:31:17.593971 systemd[1]: Started sshd@9-88.99.82.111:22-139.178.89.65:44434.service - OpenSSH per-connection server daemon (139.178.89.65:44434). 
Dec 16 12:31:18.010430 kubelet[2757]: E1216 12:31:18.009655 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:31:18.672425 sshd[4962]: Accepted publickey for core from 139.178.89.65 port 44434 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:18.676037 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:18.683060 systemd-logind[1519]: New session 10 of user core. Dec 16 12:31:18.689614 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:31:19.537503 sshd[4965]: Connection closed by 139.178.89.65 port 44434 Dec 16 12:31:19.539742 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:19.545519 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:31:19.545750 systemd[1]: sshd@9-88.99.82.111:22-139.178.89.65:44434.service: Deactivated successfully. Dec 16 12:31:19.549855 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:31:19.552417 systemd-logind[1519]: Removed session 10. Dec 16 12:31:19.718710 systemd[1]: Started sshd@10-88.99.82.111:22-139.178.89.65:44450.service - OpenSSH per-connection server daemon (139.178.89.65:44450). 
Dec 16 12:31:20.012729 kubelet[2757]: E1216 12:31:20.011261 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:31:20.741744 sshd[4974]: Accepted publickey for core from 139.178.89.65 port 44450 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:20.743340 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:20.752680 systemd-logind[1519]: New session 11 of user core. Dec 16 12:31:20.758771 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:31:21.010369 kubelet[2757]: E1216 12:31:21.010208 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:31:21.010767 kubelet[2757]: E1216 12:31:21.010632 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:31:21.558698 sshd[4977]: Connection closed by 139.178.89.65 port 44450 Dec 16 12:31:21.559724 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:21.567348 systemd[1]: sshd@10-88.99.82.111:22-139.178.89.65:44450.service: Deactivated successfully. Dec 16 12:31:21.570185 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:31:21.571337 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:31:21.573827 systemd-logind[1519]: Removed session 11. Dec 16 12:31:26.739758 systemd[1]: Started sshd@11-88.99.82.111:22-139.178.89.65:49990.service - OpenSSH per-connection server daemon (139.178.89.65:49990). Dec 16 12:31:27.009388 kubelet[2757]: E1216 12:31:27.008403 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:31:27.757914 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 49990 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:27.760999 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:27.771630 systemd-logind[1519]: New session 12 of user core. Dec 16 12:31:27.779748 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:31:28.539989 sshd[4996]: Connection closed by 139.178.89.65 port 49990 Dec 16 12:31:28.539873 sshd-session[4993]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:28.545945 systemd[1]: sshd@11-88.99.82.111:22-139.178.89.65:49990.service: Deactivated successfully. Dec 16 12:31:28.549129 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:31:28.552916 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:31:28.555003 systemd-logind[1519]: Removed session 12. Dec 16 12:31:29.009060 kubelet[2757]: E1216 12:31:29.008637 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:31:31.009050 kubelet[2757]: E1216 12:31:31.008613 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:31:32.009373 kubelet[2757]: E1216 12:31:32.008863 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:31:33.715009 systemd[1]: Started sshd@12-88.99.82.111:22-139.178.89.65:60180.service - OpenSSH per-connection server daemon (139.178.89.65:60180). Dec 16 12:31:34.022796 kubelet[2757]: E1216 12:31:34.022611 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:31:34.025342 kubelet[2757]: E1216 12:31:34.025186 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:31:34.745436 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 60180 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:34.747684 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:34.755136 systemd-logind[1519]: New session 13 of user core. Dec 16 12:31:34.764679 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:31:35.525227 sshd[5013]: Connection closed by 139.178.89.65 port 60180 Dec 16 12:31:35.525949 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:35.534380 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:31:35.535195 systemd[1]: sshd@12-88.99.82.111:22-139.178.89.65:60180.service: Deactivated successfully. Dec 16 12:31:35.539312 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:31:35.543475 systemd-logind[1519]: Removed session 13. Dec 16 12:31:40.702167 systemd[1]: Started sshd@13-88.99.82.111:22-139.178.89.65:41080.service - OpenSSH per-connection server daemon (139.178.89.65:41080). 
Dec 16 12:31:41.715744 sshd[5027]: Accepted publickey for core from 139.178.89.65 port 41080 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:41.716848 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:41.723130 systemd-logind[1519]: New session 14 of user core. Dec 16 12:31:41.729587 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:31:42.014465 kubelet[2757]: E1216 12:31:42.013399 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:31:42.015866 kubelet[2757]: E1216 12:31:42.015804 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:31:42.511506 sshd[5030]: Connection closed by 139.178.89.65 port 41080 Dec 16 12:31:42.512619 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:42.517597 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 12:31:42.518199 systemd[1]: sshd@13-88.99.82.111:22-139.178.89.65:41080.service: Deactivated successfully. Dec 16 12:31:42.523068 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:31:42.527184 systemd-logind[1519]: Removed session 14. Dec 16 12:31:42.694945 systemd[1]: Started sshd@14-88.99.82.111:22-139.178.89.65:41086.service - OpenSSH per-connection server daemon (139.178.89.65:41086). Dec 16 12:31:43.760705 sshd[5065]: Accepted publickey for core from 139.178.89.65 port 41086 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:43.762798 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:43.768077 systemd-logind[1519]: New session 15 of user core. Dec 16 12:31:43.776791 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:31:44.009063 kubelet[2757]: E1216 12:31:44.008896 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:31:44.719605 sshd[5068]: Connection closed by 139.178.89.65 port 41086 Dec 16 12:31:44.721737 sshd-session[5065]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:44.730969 systemd[1]: sshd@14-88.99.82.111:22-139.178.89.65:41086.service: Deactivated successfully. Dec 16 12:31:44.731494 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:31:44.734738 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:31:44.738783 systemd-logind[1519]: Removed session 15. 
Dec 16 12:31:44.899925 systemd[1]: Started sshd@15-88.99.82.111:22-139.178.89.65:41102.service - OpenSSH per-connection server daemon (139.178.89.65:41102). Dec 16 12:31:45.009376 kubelet[2757]: E1216 12:31:45.008986 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:31:45.979311 sshd[5078]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:45.981189 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:45.988605 systemd-logind[1519]: New session 16 of user core. Dec 16 12:31:45.994629 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:31:46.014711 kubelet[2757]: E1216 12:31:46.014572 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:31:46.016073 kubelet[2757]: E1216 12:31:46.016035 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:31:47.540991 sshd[5081]: Connection closed by 139.178.89.65 port 41102 Dec 16 12:31:47.544704 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:47.549965 systemd[1]: sshd@15-88.99.82.111:22-139.178.89.65:41102.service: Deactivated successfully. Dec 16 12:31:47.553230 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:31:47.557760 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:31:47.559096 systemd-logind[1519]: Removed session 16. Dec 16 12:31:47.726681 systemd[1]: Started sshd@16-88.99.82.111:22-139.178.89.65:41110.service - OpenSSH per-connection server daemon (139.178.89.65:41110). Dec 16 12:31:48.807981 sshd[5100]: Accepted publickey for core from 139.178.89.65 port 41110 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:48.809628 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:48.815881 systemd-logind[1519]: New session 17 of user core. Dec 16 12:31:48.824511 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:31:49.790620 sshd[5103]: Connection closed by 139.178.89.65 port 41110 Dec 16 12:31:49.792764 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:49.798689 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:31:49.799499 systemd[1]: sshd@16-88.99.82.111:22-139.178.89.65:41110.service: Deactivated successfully. Dec 16 12:31:49.801602 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:31:49.804058 systemd-logind[1519]: Removed session 17. Dec 16 12:31:49.974669 systemd[1]: Started sshd@17-88.99.82.111:22-139.178.89.65:41112.service - OpenSSH per-connection server daemon (139.178.89.65:41112). 
Dec 16 12:31:51.044503 sshd[5113]: Accepted publickey for core from 139.178.89.65 port 41112 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:51.046625 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:51.054593 systemd-logind[1519]: New session 18 of user core. Dec 16 12:31:51.061643 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:31:51.851501 sshd[5116]: Connection closed by 139.178.89.65 port 41112 Dec 16 12:31:51.852304 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:51.857900 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:31:51.858297 systemd[1]: sshd@17-88.99.82.111:22-139.178.89.65:41112.service: Deactivated successfully. Dec 16 12:31:51.862128 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:31:51.867575 systemd-logind[1519]: Removed session 18. Dec 16 12:31:54.013794 kubelet[2757]: E1216 12:31:54.011347 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:31:55.008088 kubelet[2757]: E1216 12:31:55.007965 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:31:57.009508 kubelet[2757]: E1216 12:31:57.009143 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:31:57.035020 systemd[1]: Started sshd@18-88.99.82.111:22-139.178.89.65:40022.service - OpenSSH per-connection server daemon (139.178.89.65:40022). 
Dec 16 12:31:58.009613 kubelet[2757]: E1216 12:31:58.009564 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:31:58.093168 sshd[5139]: Accepted publickey for core from 139.178.89.65 port 40022 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:31:58.096018 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:58.103218 systemd-logind[1519]: New session 19 of user core. Dec 16 12:31:58.108727 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:31:58.921802 sshd[5142]: Connection closed by 139.178.89.65 port 40022 Dec 16 12:31:58.920756 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:58.927259 systemd[1]: sshd@18-88.99.82.111:22-139.178.89.65:40022.service: Deactivated successfully. Dec 16 12:31:58.932325 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:31:58.933724 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:31:58.936614 systemd-logind[1519]: Removed session 19. 
Dec 16 12:32:00.011205 containerd[1536]: time="2025-12-16T12:32:00.010840411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:32:00.387984 containerd[1536]: time="2025-12-16T12:32:00.387778074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:00.389708 containerd[1536]: time="2025-12-16T12:32:00.389483170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:32:00.389708 containerd[1536]: time="2025-12-16T12:32:00.389590011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:32:00.390066 kubelet[2757]: E1216 12:32:00.389993 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:32:00.390584 kubelet[2757]: E1216 12:32:00.390086 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:32:00.390584 kubelet[2757]: E1216 12:32:00.390297 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvjgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d8c786985-6rqkc_calico-system(6eaa9000-d78a-4f2c-90ec-72eb46b8f615): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:00.391542 kubelet[2757]: E1216 12:32:00.391474 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:32:01.009047 kubelet[2757]: E1216 12:32:01.008973 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac" Dec 16 12:32:04.103647 systemd[1]: Started sshd@19-88.99.82.111:22-139.178.89.65:42958.service - OpenSSH per-connection server daemon (139.178.89.65:42958). Dec 16 12:32:05.167093 sshd[5156]: Accepted publickey for core from 139.178.89.65 port 42958 ssh2: RSA SHA256:dvgfFUX4LTtI/InRtbpPXJ1Q5z3l9e9H0QH68YHigeA Dec 16 12:32:05.168588 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:05.175065 systemd-logind[1519]: New session 20 of user core. Dec 16 12:32:05.180870 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:32:05.996441 sshd[5159]: Connection closed by 139.178.89.65 port 42958 Dec 16 12:32:05.998135 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:06.005562 systemd-logind[1519]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:32:06.006519 systemd[1]: sshd@19-88.99.82.111:22-139.178.89.65:42958.service: Deactivated successfully. Dec 16 12:32:06.014863 systemd[1]: session-20.scope: Deactivated successfully. 
Dec 16 12:32:06.022684 systemd-logind[1519]: Removed session 20. Dec 16 12:32:08.009583 containerd[1536]: time="2025-12-16T12:32:08.009520143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:32:08.354750 containerd[1536]: time="2025-12-16T12:32:08.354679382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:08.356434 containerd[1536]: time="2025-12-16T12:32:08.356321397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:32:08.356574 containerd[1536]: time="2025-12-16T12:32:08.356486879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:32:08.356763 kubelet[2757]: E1216 12:32:08.356726 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:32:08.357394 kubelet[2757]: E1216 12:32:08.357119 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:32:08.357394 kubelet[2757]: E1216 12:32:08.357239 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58879817308484fbfd5b74fefa0ca63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:08.360496 containerd[1536]: time="2025-12-16T12:32:08.360441194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 
12:32:08.711091 containerd[1536]: time="2025-12-16T12:32:08.710942282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:08.712729 containerd[1536]: time="2025-12-16T12:32:08.712400535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:32:08.712729 containerd[1536]: time="2025-12-16T12:32:08.712501016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:32:08.712904 kubelet[2757]: E1216 12:32:08.712767 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:32:08.712904 kubelet[2757]: E1216 12:32:08.712844 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:32:08.713132 kubelet[2757]: E1216 12:32:08.712989 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85b54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6cc8765bd4-8622q_calico-system(0af3bb6b-9dd4-40a9-8f44-2121edd9c07c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:08.714320 kubelet[2757]: E1216 12:32:08.714251 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c" Dec 16 12:32:09.010050 kubelet[2757]: E1216 12:32:09.009195 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3" Dec 16 12:32:10.013383 containerd[1536]: time="2025-12-16T12:32:10.013313142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:32:10.357938 containerd[1536]: time="2025-12-16T12:32:10.357886730Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Dec 16 12:32:10.359349 containerd[1536]: time="2025-12-16T12:32:10.359257822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:32:10.359502 containerd[1536]: time="2025-12-16T12:32:10.359309422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:32:10.360008 kubelet[2757]: E1216 12:32:10.359704 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:32:10.360008 kubelet[2757]: E1216 12:32:10.359768 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:32:10.360008 kubelet[2757]: E1216 12:32:10.359899 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mv48m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-9x9hg_calico-apiserver(95ed81e2-a00e-4be1-9fa5-31d53c3e4933): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:10.361548 kubelet[2757]: E1216 12:32:10.361393 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933" Dec 16 12:32:12.010047 containerd[1536]: time="2025-12-16T12:32:12.009761171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:32:12.357973 containerd[1536]: time="2025-12-16T12:32:12.357908426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:12.359481 containerd[1536]: time="2025-12-16T12:32:12.359420439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:32:12.359614 containerd[1536]: time="2025-12-16T12:32:12.359545721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:32:12.359803 kubelet[2757]: E1216 12:32:12.359738 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:32:12.359803 kubelet[2757]: E1216 12:32:12.359805 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:32:12.360536 kubelet[2757]: E1216 12:32:12.360185 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfw7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qk9mh_calico-system(9e9d246f-dbf0-4c36-8e21-415862de6ecd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:12.361573 kubelet[2757]: E1216 12:32:12.361487 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd" Dec 16 12:32:14.009194 kubelet[2757]: E1216 12:32:14.008614 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" Dec 16 12:32:16.011609 containerd[1536]: time="2025-12-16T12:32:16.010908386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:32:16.364939 containerd[1536]: time="2025-12-16T12:32:16.364593682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:16.366719 containerd[1536]: time="2025-12-16T12:32:16.366550539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:32:16.366719 containerd[1536]: time="2025-12-16T12:32:16.366649660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:32:16.367262 kubelet[2757]: E1216 12:32:16.366988 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:32:16.367262 kubelet[2757]: E1216 12:32:16.367054 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:32:16.367262 kubelet[2757]: E1216 12:32:16.367162 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:
*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:16.370269 containerd[1536]: time="2025-12-16T12:32:16.369947848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:32:16.730920 containerd[1536]: time="2025-12-16T12:32:16.730494563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:16.735260 containerd[1536]: time="2025-12-16T12:32:16.734618398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:32:16.735260 containerd[1536]: time="2025-12-16T12:32:16.734733039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:32:16.735498 kubelet[2757]: E1216 12:32:16.734871 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:32:16.735498 kubelet[2757]: E1216 12:32:16.734919 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:32:16.735498 kubelet[2757]: E1216 12:32:16.735031 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkl2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c5llv_calico-system(de20fd2d-5c54-466a-8865-47a00aec7cac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:32:16.736317 kubelet[2757]: E1216 12:32:16.736227 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c5llv" podUID="de20fd2d-5c54-466a-8865-47a00aec7cac"
Dec 16 12:32:20.013168 containerd[1536]: time="2025-12-16T12:32:20.013065009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:32:20.013879 kubelet[2757]: E1216 12:32:20.013514 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6cc8765bd4-8622q" podUID="0af3bb6b-9dd4-40a9-8f44-2121edd9c07c"
Dec 16 12:32:20.353910 containerd[1536]: time="2025-12-16T12:32:20.353837677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:32:20.355493 containerd[1536]: time="2025-12-16T12:32:20.355414050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:32:20.355597 containerd[1536]: time="2025-12-16T12:32:20.355539451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:32:20.355872 kubelet[2757]: E1216 12:32:20.355793 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:32:20.356086 kubelet[2757]: E1216 12:32:20.355873 2757 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:32:20.356086 kubelet[2757]: E1216 12:32:20.356035 2757 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmbc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-654c6f7d5b-snhmr_calico-apiserver(b160a20e-ffb1-4e7e-91c1-0be63ff8efe3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:32:20.357440 kubelet[2757]: E1216 12:32:20.357341 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-snhmr" podUID="b160a20e-ffb1-4e7e-91c1-0be63ff8efe3"
Dec 16 12:32:20.784944 systemd[1]: cri-containerd-849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d.scope: Deactivated successfully.
Dec 16 12:32:20.785674 systemd[1]: cri-containerd-849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d.scope: Consumed 40.272s CPU time, 101M memory peak.
Dec 16 12:32:20.788078 containerd[1536]: time="2025-12-16T12:32:20.787954560Z" level=info msg="received container exit event container_id:\"849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d\" id:\"849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d\" pid:3073 exit_status:1 exited_at:{seconds:1765888340 nanos:787536796}"
Dec 16 12:32:20.813856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d-rootfs.mount: Deactivated successfully.
Dec 16 12:32:20.830385 kubelet[2757]: I1216 12:32:20.830091 2757 scope.go:117] "RemoveContainer" containerID="849cb1d444f3ff2d8dc1b387fd4b775fbdc2a478ade2b21739d9b059b4b4918d"
Dec 16 12:32:20.832471 containerd[1536]: time="2025-12-16T12:32:20.832432329Z" level=info msg="CreateContainer within sandbox \"6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 16 12:32:20.844513 containerd[1536]: time="2025-12-16T12:32:20.842907056Z" level=info msg="Container 246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:32:20.852412 containerd[1536]: time="2025-12-16T12:32:20.852349974Z" level=info msg="CreateContainer within sandbox \"6aac3e64664ab2f5c2505c7e796659c6c5ad399b093b6a7e0c506b088c2536ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef\""
Dec 16 12:32:20.853988 containerd[1536]: time="2025-12-16T12:32:20.852868138Z" level=info msg="StartContainer for \"246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef\""
Dec 16 12:32:20.853988 containerd[1536]: time="2025-12-16T12:32:20.853741706Z" level=info msg="connecting to shim 246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef" address="unix:///run/containerd/s/07cd2268c32c21b368d764f7ec08bccf93a4b31f14e470d0c14f2aa1b46bddbe" protocol=ttrpc version=3
Dec 16 12:32:20.879825 systemd[1]: Started cri-containerd-246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef.scope - libcontainer container 246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef.
Dec 16 12:32:20.926178 containerd[1536]: time="2025-12-16T12:32:20.926098866Z" level=info msg="StartContainer for \"246cb3a0d940d10c80842edf6c836dfce7a09d9b43167fe1ba012a427ed2a4ef\" returns successfully"
Dec 16 12:32:21.010907 kubelet[2757]: E1216 12:32:21.010835 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-654c6f7d5b-9x9hg" podUID="95ed81e2-a00e-4be1-9fa5-31d53c3e4933"
Dec 16 12:32:21.013406 systemd[1]: cri-containerd-bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386.scope: Deactivated successfully.
Dec 16 12:32:21.014351 systemd[1]: cri-containerd-bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386.scope: Consumed 5.412s CPU time, 57.6M memory peak, 2.7M read from disk.
Dec 16 12:32:21.016782 containerd[1536]: time="2025-12-16T12:32:21.016727097Z" level=info msg="received container exit event container_id:\"bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386\" id:\"bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386\" pid:2616 exit_status:1 exited_at:{seconds:1765888341 nanos:16032612}"
Dec 16 12:32:21.041997 kubelet[2757]: E1216 12:32:21.041865 2757 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42860->10.0.0.2:2379: read: connection timed out"
Dec 16 12:32:21.049139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386-rootfs.mount: Deactivated successfully.
Dec 16 12:32:21.102487 kubelet[2757]: E1216 12:32:21.102169 2757 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42696->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-qk9mh.1881b1ed759e1805 calico-system 1355 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-qk9mh,UID:9e9d246f-dbf0-4c36-8e21-415862de6ecd,APIVersion:v1,ResourceVersion:822,FieldPath:spec.containers{goldmane},},Reason:Pulling,Message:Pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-2-0-7f64ef3ba0,},FirstTimestamp:2025-12-16 12:29:22 +0000 UTC,LastTimestamp:2025-12-16 12:32:12.008048036 +0000 UTC m=+222.108320773,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-0-7f64ef3ba0,}"
Dec 16 12:32:21.835326 kubelet[2757]: I1216 12:32:21.835230 2757 scope.go:117] "RemoveContainer" containerID="bce43a6c367181a98a505482859f536e4b4f83822d4b6ce59100c4dc78eb8386"
Dec 16 12:32:21.845413 containerd[1536]: time="2025-12-16T12:32:21.845371968Z" level=info msg="CreateContainer within sandbox \"364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 16 12:32:21.858500 containerd[1536]: time="2025-12-16T12:32:21.856840823Z" level=info msg="Container b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:32:21.869775 containerd[1536]: time="2025-12-16T12:32:21.869710249Z" level=info msg="CreateContainer within sandbox \"364d97b320adf6fca0899a22c278666d0a77062846ec2cd90cb5afa2fa941319\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b\""
Dec 16 12:32:21.870832 containerd[1536]: time="2025-12-16T12:32:21.870797538Z" level=info msg="StartContainer for \"b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b\""
Dec 16 12:32:21.872441 containerd[1536]: time="2025-12-16T12:32:21.872407871Z" level=info msg="connecting to shim b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b" address="unix:///run/containerd/s/11ff23c360ba8a07c55bace55d7648c48f58a95d8f7640fdb0154c1dc76b3a9a" protocol=ttrpc version=3
Dec 16 12:32:21.895606 systemd[1]: Started cri-containerd-b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b.scope - libcontainer container b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b.
Dec 16 12:32:21.945124 containerd[1536]: time="2025-12-16T12:32:21.945078310Z" level=info msg="StartContainer for \"b8df205e069742374bcfe62076feffe76a26593f7578b068e0023cd884def43b\" returns successfully"
Dec 16 12:32:23.008863 kubelet[2757]: E1216 12:32:23.008786 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qk9mh" podUID="9e9d246f-dbf0-4c36-8e21-415862de6ecd"
Dec 16 12:32:24.430246 kubelet[2757]: I1216 12:32:24.429969 2757 status_manager.go:890] "Failed to get status for pod" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42776->10.0.0.2:2379: read: connection timed out"
Dec 16 12:32:26.009391 kubelet[2757]: E1216 12:32:26.009281 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d8c786985-6rqkc" podUID="6eaa9000-d78a-4f2c-90ec-72eb46b8f615"